Nov 24 11:59:12 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 11:59:12 crc restorecon[4685]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:59:12 crc restorecon[4685]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc 
restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc 
restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 
11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc 
restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:59:12 crc 
restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 
11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:59:12 crc 
restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:12 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 
crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc 
restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc 
restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:59:13 crc restorecon[4685]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:59:13 crc restorecon[4685]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 24 11:59:13 crc kubenswrapper[4930]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 11:59:13 crc kubenswrapper[4930]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 24 11:59:13 crc kubenswrapper[4930]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 11:59:13 crc kubenswrapper[4930]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 11:59:13 crc kubenswrapper[4930]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 24 11:59:13 crc kubenswrapper[4930]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.836350 4930 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849069 4930 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849115 4930 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849125 4930 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849135 4930 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849145 4930 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849156 4930 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849165 4930 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849175 4930 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849185 4930 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849194 4930 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849202 4930 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849210 4930 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849218 4930 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849226 4930 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849233 4930 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849241 4930 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849249 4930 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849257 4930 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849264 4930 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849272 4930 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849280 4930 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849288 4930 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 
11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849296 4930 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849304 4930 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849312 4930 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849320 4930 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849328 4930 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849335 4930 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849343 4930 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849350 4930 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849358 4930 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849366 4930 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849374 4930 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849392 4930 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849404 4930 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849413 4930 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849422 4930 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849432 4930 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849441 4930 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849452 4930 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849463 4930 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849472 4930 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849480 4930 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849489 4930 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849500 4930 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849508 4930 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849518 4930 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849527 4930 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849560 4930 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849569 4930 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849577 4930 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849587 4930 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849598 4930 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849606 4930 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849617 4930 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849626 4930 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849634 4930 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849644 4930 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849653 4930 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849661 4930 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849670 4930 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849678 4930 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849686 4930 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849695 4930 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849703 4930 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849711 4930 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849720 4930 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849727 4930 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849736 4930 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849744 4930 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.849753 4930 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.850859 4930 flags.go:64] FLAG: --address="0.0.0.0"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.850884 4930 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.850899 4930 flags.go:64] FLAG: --anonymous-auth="true"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.850911 4930 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.850925 4930 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.850934 4930 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.850946 4930 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.850958 4930 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.850969 4930 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.850979 4930 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.850989 4930 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.850999 4930 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851009 4930 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851018 4930 flags.go:64] FLAG: --cgroup-root=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851027 4930 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851036 4930 flags.go:64] FLAG: --client-ca-file=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851045 4930 flags.go:64] FLAG: --cloud-config=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851054 4930 flags.go:64] FLAG: --cloud-provider=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851063 4930 flags.go:64] FLAG: --cluster-dns="[]"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851076 4930 flags.go:64] FLAG: --cluster-domain=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851085 4930 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851094 4930 flags.go:64] FLAG: --config-dir=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851103 4930 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851113 4930 flags.go:64] FLAG: --container-log-max-files="5"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851126 4930 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851136 4930 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851145 4930 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851155 4930 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851165 4930 flags.go:64] FLAG: --contention-profiling="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851174 4930 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851183 4930 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851192 4930 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851201 4930 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851212 4930 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851221 4930 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851230 4930 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851239 4930 flags.go:64] FLAG: --enable-load-reader="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851249 4930 flags.go:64] FLAG: --enable-server="true"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851258 4930 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851270 4930 flags.go:64] FLAG: --event-burst="100"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851279 4930 flags.go:64] FLAG: --event-qps="50"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851288 4930 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851297 4930 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851306 4930 flags.go:64] FLAG: --eviction-hard=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851317 4930 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851326 4930 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851335 4930 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851345 4930 flags.go:64] FLAG: --eviction-soft=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851354 4930 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851364 4930 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851373 4930 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851382 4930 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851390 4930 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851399 4930 flags.go:64] FLAG: --fail-swap-on="true"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851408 4930 flags.go:64] FLAG: --feature-gates=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851420 4930 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851429 4930 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
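Because the kubelet is running at verbosity `--v="2"`, it logs every command-line flag (defaults included) as a `flags.go:64] FLAG: --name="value"` record at startup. When comparing two nodes' effective flags, it helps to fold this dump into a dictionary. A minimal sketch, assuming the journal text is in a string (`kubelet_flags` is an illustrative helper name, not part of any tool):

```python
import re

# Matches flag-dump records of the form:
#   I1124 11:59:13.850899 4930 flags.go:64] FLAG: --anonymous-auth="true"
FLAG_RE = re.compile(r'flags\.go:64\] FLAG: (--[\w-]+)="(.*?)"')

def kubelet_flags(log_text: str) -> dict[str, str]:
    """Map each logged kubelet flag to its (string) value."""
    return dict(FLAG_RE.findall(log_text))
```

With the records above, `kubelet_flags(text)["--config"]` would yield `/etc/kubernetes/kubelet.conf`, the file whose contents override most of these flag defaults.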
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851439 4930 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851448 4930 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851458 4930 flags.go:64] FLAG: --healthz-port="10248"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851467 4930 flags.go:64] FLAG: --help="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851476 4930 flags.go:64] FLAG: --hostname-override=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851485 4930 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851494 4930 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851503 4930 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851512 4930 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851521 4930 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851530 4930 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851563 4930 flags.go:64] FLAG: --image-service-endpoint=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851572 4930 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851581 4930 flags.go:64] FLAG: --kube-api-burst="100"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851590 4930 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851602 4930 flags.go:64] FLAG: --kube-api-qps="50"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851611 4930 flags.go:64] FLAG: --kube-reserved=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851620 4930 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851629 4930 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851638 4930 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851647 4930 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851656 4930 flags.go:64] FLAG: --lock-file=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851664 4930 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851674 4930 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851682 4930 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851706 4930 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851720 4930 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851728 4930 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851738 4930 flags.go:64] FLAG: --logging-format="text"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851746 4930 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851756 4930 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851765 4930 flags.go:64] FLAG: --manifest-url=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851774 4930 flags.go:64] FLAG: --manifest-url-header=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851786 4930 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851795 4930 flags.go:64] FLAG: --max-open-files="1000000"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851807 4930 flags.go:64] FLAG: --max-pods="110"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851816 4930 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851825 4930 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851834 4930 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851843 4930 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851852 4930 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851861 4930 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851871 4930 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851891 4930 flags.go:64] FLAG: --node-status-max-images="50"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851901 4930 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851910 4930 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851919 4930 flags.go:64] FLAG: --pod-cidr=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851927 4930 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851974 4930 flags.go:64] FLAG: --pod-manifest-path=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851987 4930 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.851996 4930 flags.go:64] FLAG: --pods-per-core="0"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852005 4930 flags.go:64] FLAG: --port="10250"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852015 4930 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852024 4930 flags.go:64] FLAG: --provider-id=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852033 4930 flags.go:64] FLAG: --qos-reserved=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852042 4930 flags.go:64] FLAG: --read-only-port="10255"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852051 4930 flags.go:64] FLAG: --register-node="true"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852060 4930 flags.go:64] FLAG: --register-schedulable="true"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852069 4930 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852090 4930 flags.go:64] FLAG: --registry-burst="10"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852100 4930 flags.go:64] FLAG: --registry-qps="5"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852109 4930 flags.go:64] FLAG: --reserved-cpus=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852119 4930 flags.go:64] FLAG: --reserved-memory=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852131 4930 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852141 4930 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852156 4930 flags.go:64] FLAG: --rotate-certificates="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852165 4930 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852174 4930 flags.go:64] FLAG: --runonce="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852183 4930 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852192 4930 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852202 4930 flags.go:64] FLAG: --seccomp-default="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852210 4930 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852219 4930 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852228 4930 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852238 4930 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852248 4930 flags.go:64] FLAG: --storage-driver-password="root"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852257 4930 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852265 4930 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852274 4930 flags.go:64] FLAG: --storage-driver-user="root"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852283 4930 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852293 4930 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852303 4930 flags.go:64] FLAG: --system-cgroups=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852312 4930 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852326 4930 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852335 4930 flags.go:64] FLAG: --tls-cert-file=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852344 4930 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852356 4930 flags.go:64] FLAG: --tls-min-version=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852365 4930 flags.go:64] FLAG: --tls-private-key-file=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852374 4930 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852383 4930 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852392 4930 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852402 4930 flags.go:64] FLAG: --v="2"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852413 4930 flags.go:64] FLAG: --version="false"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852425 4930 flags.go:64] FLAG: --vmodule=""
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852435 4930 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.852445 4930 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852706 4930 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852721 4930 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852732 4930 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852740 4930 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852749 4930 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852760 4930 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852770 4930 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852780 4930 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852790 4930 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852798 4930 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852807 4930 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852814 4930 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852822 4930 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852830 4930 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852837 4930 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852845 4930 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852853 4930 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852860 4930 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852869 4930 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852876 4930 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852884 4930 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852892 4930 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852900 4930 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852907 4930 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852918 4930 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852926 4930 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852933 4930 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852941 4930 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852949 4930 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852957 4930 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852968 4930 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852978 4930 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.852988 4930 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853001 4930 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853009 4930 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853017 4930 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853026 4930 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853034 4930 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853043 4930 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853051 4930 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853059 4930 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853067 4930 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853074 4930 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853082 4930 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853090 4930 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853100 4930 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853108 4930 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853116 4930 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853124 4930 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853134 4930 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853144 4930 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853152 4930 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853160 4930 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853167 4930 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853175 4930 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853183 4930 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853193 4930 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853201 4930 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853209 4930 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853216 4930 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853224 4930 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853232 4930 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853240 4930 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853247 4930 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853255 4930 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853270 4930 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853278 4930 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853285 4930 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853294 4930 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853301 4930 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.853309 4930 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.854264 4930 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.863518 4930 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.863578 4930 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863650 4930 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863658 4930 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863663 4930 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863668 4930 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863673 4930 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863678 4930 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863682 4930 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863686 4930 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863690 4930 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863694 4930 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863697 4930 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863701 4930 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863705 4930 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863708 4930 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863712 4930 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863716 4930 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863719 4930 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863723 4930 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863726 4930 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863730 4930 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863734 4930 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863737 4930 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863741 4930 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863745 4930 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863751 4930 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863755 4930 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863759 4930 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863763 4930 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863766 4930 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863770 4930 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863775 4930 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863779 4930 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863782 4930 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863786 4930 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863791 4930 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863795 4930 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863799 4930 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863803 4930 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863808 4930 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863812 4930 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863816 4930 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863821 4930 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863824 4930 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863829 4930 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863832 4930 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863836 4930 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863840 4930 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863843 4930 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863847 4930 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863850 4930 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863853 4930 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863857 4930 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863861 4930 
feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863865 4930 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863868 4930 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863872 4930 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863876 4930 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863879 4930 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863883 4930 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863886 4930 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863890 4930 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863893 4930 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863897 4930 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863901 4930 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863904 4930 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863908 4930 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863911 4930 feature_gate.go:330] 
unrecognized feature gate: MixedCPUsAllocation Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863917 4930 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863921 4930 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863926 4930 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.863931 4930 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.863957 4930 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864090 4930 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864099 4930 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864104 4930 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864108 4930 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864112 4930 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864115 4930 feature_gate.go:330] unrecognized 
feature gate: InsightsRuntimeExtractor Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864119 4930 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864123 4930 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864126 4930 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864130 4930 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864134 4930 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864137 4930 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864141 4930 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864144 4930 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864148 4930 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864152 4930 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864155 4930 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864158 4930 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864162 4930 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864166 4930 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 11:59:13 crc kubenswrapper[4930]: 
W1124 11:59:13.864169 4930 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864172 4930 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864176 4930 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864182 4930 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864189 4930 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864193 4930 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864196 4930 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864201 4930 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864204 4930 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864208 4930 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864212 4930 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864216 4930 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864221 4930 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864226 4930 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864232 4930 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864236 4930 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864240 4930 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864243 4930 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864247 4930 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864251 4930 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864255 4930 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864258 4930 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864261 4930 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864265 4930 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864268 4930 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864272 4930 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864275 4930 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864279 4930 feature_gate.go:330] 
unrecognized feature gate: VSphereDriverConfiguration Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864282 4930 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864286 4930 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864289 4930 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864293 4930 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864297 4930 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864300 4930 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864304 4930 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864308 4930 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864311 4930 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864316 4930 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864321 4930 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864326 4930 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864331 4930 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864336 4930 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864341 4930 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864347 4930 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864351 4930 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864357 4930 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864361 4930 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864365 4930 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864369 4930 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864374 4930 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 11:59:13 crc kubenswrapper[4930]: W1124 11:59:13.864379 4930 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.864385 4930 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false 
KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.864635 4930 server.go:940] "Client rotation is on, will bootstrap in background" Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.868395 4930 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.868487 4930 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.869744 4930 server.go:997] "Starting client certificate rotation" Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.869761 4930 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.869927 4930 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-08 18:40:42.507882214 +0000 UTC Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.870035 4930 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1086h41m28.637860685s for next certificate rotation Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.902257 4930 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.906250 4930 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.924136 4930 log.go:25] 
"Validated CRI v1 runtime API" Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.967162 4930 log.go:25] "Validated CRI v1 image API" Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.969026 4930 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.977194 4930 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-11-54-47-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 24 11:59:13 crc kubenswrapper[4930]: I1124 11:59:13.977269 4930 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.000889 4930 manager.go:217] Machine: {Timestamp:2025-11-24 11:59:13.997764344 +0000 UTC m=+0.612092314 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7e3330cf-3d22-4119-8ec8-af730100ba56 BootID:26e464ae-360f-4bd3-8823-d8644163564e Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 
DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a1:90:b2 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a1:90:b2 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ac:33:c0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:68:8a:67 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:bd:d1:2f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f3:bd:69 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b2:db:cd:78:25:2f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:31:ae:81:a6:83 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 
Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 
Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.001184 4930 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.001353 4930 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.006051 4930 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.006316 4930 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.006364 4930 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.006685 4930 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.006702 4930 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.007419 4930 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.007497 4930 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.007746 4930 state_mem.go:36] "Initialized new in-memory state store" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.007858 4930 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.011528 4930 kubelet.go:418] "Attempting to sync node with API server" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.011569 4930 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.011588 4930 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.011604 4930 kubelet.go:324] "Adding apiserver pod source" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.011618 4930 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 11:59:14 crc kubenswrapper[4930]: W1124 11:59:14.017213 4930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.017321 4930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:59:14 crc kubenswrapper[4930]: W1124 11:59:14.017576 4930 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.017644 4930 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.017660 4930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.019511 4930 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.021883 4930 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.025151 4930 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.025194 4930 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.025206 4930 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.025216 4930 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.025263 4930 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.025275 4930 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.025287 4930 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.025314 4930 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.025329 4930 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.025341 4930 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.025358 4930 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.025369 4930 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.028948 4930 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.029799 4930 server.go:1280] "Started kubelet" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.031131 4930 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.031336 4930 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.031472 4930 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.032208 4930 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 11:59:14 crc systemd[1]: Started Kubernetes Kubelet. Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.032699 4930 server.go:460] "Adding debug handlers to kubelet server" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.033711 4930 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.033752 4930 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.033812 4930 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:06:02.38529451 +0000 UTC Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.033834 4930 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.033855 4930 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.033872 4930 
volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.033864 4930 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.034840 4930 factory.go:55] Registering systemd factory Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.034882 4930 factory.go:221] Registration of the systemd container factory successfully Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.035052 4930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="200ms" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.035342 4930 factory.go:153] Registering CRI-O factory Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.035381 4930 factory.go:221] Registration of the crio container factory successfully Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.035468 4930 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.035500 4930 factory.go:103] Registering Raw factory Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.035522 4930 manager.go:1196] Started watching for new ooms in manager Nov 24 11:59:14 crc kubenswrapper[4930]: W1124 11:59:14.036046 4930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.036158 4930 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.036926 4930 manager.go:319] Starting recovery of all containers Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.036828 4930 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187aef83f0dcf44a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 11:59:14.029737034 +0000 UTC m=+0.644064994,LastTimestamp:2025-11-24 11:59:14.029737034 +0000 UTC m=+0.644064994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042316 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042416 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: 
I1124 11:59:14.042438 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042616 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042637 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042653 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042669 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042685 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042703 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042720 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042737 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042757 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042774 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042796 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042814 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042830 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042893 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042923 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.042941 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043135 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043154 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043171 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043189 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043208 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043248 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043269 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043299 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" 
seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043327 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043351 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043429 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043451 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043468 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043522 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043562 4930 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043582 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043608 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043626 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043685 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043711 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043761 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043811 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043828 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043845 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043898 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043913 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043929 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043945 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.043972 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044010 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044060 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044077 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044126 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" 
seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044165 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044214 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044236 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044277 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044310 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044327 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 24 11:59:14 crc 
kubenswrapper[4930]: I1124 11:59:14.044344 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044361 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044377 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044410 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044428 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.044446 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.048996 4930 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.049069 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.049110 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.049149 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.049178 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.049216 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.049236 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.049321 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.049385 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.049815 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.050009 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.050137 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.050278 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.050406 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.050569 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.050701 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.050808 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.050957 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.051171 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.051304 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.051431 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.051570 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.051692 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.051812 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.052055 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.052177 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.052293 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.052454 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.052606 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.052761 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.052896 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.053029 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.053174 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.053334 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.053441 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.053605 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.053799 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.053941 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.054137 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.054260 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.054574 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.054734 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055000 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055060 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055097 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055123 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055140 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055198 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055244 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055260 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055781 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055838 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055889 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055904 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055918 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055935 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055947 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055959 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055983 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.055996 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056008 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056019 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056062 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056075 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056089 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056102 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056115 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056128 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056141 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056154 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056175 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056189 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056203 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056217 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056229 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056241 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056253 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056266 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056281 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056293 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056307 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056320 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056334 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056348 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056368 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" 
seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056380 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056393 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056422 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056435 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056446 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056460 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056472 4930 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056487 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056499 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056513 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056526 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056558 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056571 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056584 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056596 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056607 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056619 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056637 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056651 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056664 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056681 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.056694 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059803 4930 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059834 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059871 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059883 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059893 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059903 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059914 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059924 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059935 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059945 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059955 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059970 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059982 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.059993 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060002 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060012 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060022 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060032 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060058 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060073 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060082 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" 
seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060091 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060100 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060112 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060122 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060135 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060146 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 
11:59:14.060158 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060175 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060184 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060194 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060228 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060240 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060251 4930 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060261 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060272 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060281 4930 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060291 4930 reconstruct.go:97] "Volume reconstruction finished" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.060298 4930 reconciler.go:26] "Reconciler: start to sync state" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.063257 4930 manager.go:324] Recovery completed Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.071709 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.073239 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.073284 4930 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.073297 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.074219 4930 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.074259 4930 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.074281 4930 state_mem.go:36] "Initialized new in-memory state store" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.081560 4930 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.083244 4930 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.083284 4930 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.083307 4930 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.083355 4930 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 11:59:14 crc kubenswrapper[4930]: W1124 11:59:14.084764 4930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.084863 4930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.106444 4930 policy_none.go:49] "None policy: Start" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.108292 4930 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.108337 4930 state_mem.go:35] "Initializing new in-memory state store" Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.134611 4930 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.167869 4930 manager.go:334] "Starting Device Plugin manager" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.167930 4930 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.167943 4930 server.go:79] "Starting device plugin registration server" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.168414 4930 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.168433 4930 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.168621 4930 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.168812 4930 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.168831 4930 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.178043 4930 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.184690 4930 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.184801 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.186187 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.186237 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.186282 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.186487 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.186779 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.186855 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.187743 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.187778 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.187823 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.187997 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.188126 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.188153 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.188331 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.188367 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.188384 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.189378 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.189413 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.189388 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.189445 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.189458 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.189426 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.189797 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc 
kubenswrapper[4930]: I1124 11:59:14.190134 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.190284 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.191873 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.191921 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.191931 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.192056 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.192147 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.192165 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.192177 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.192487 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.192547 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.193822 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.193907 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.193924 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.194385 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.194439 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.197670 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.197694 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.197697 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.197729 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.197739 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 
11:59:14.197703 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.235649 4930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="400ms" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263156 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263221 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263266 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263293 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 
11:59:14.263348 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263421 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263439 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263454 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263493 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263534 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263575 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263600 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263618 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263637 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.263651 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.269300 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.270596 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.270629 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.270638 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.270661 4930 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.271058 4930 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.364614 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.364672 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.364704 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.364722 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.364739 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.364808 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.364808 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.364863 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.364886 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.364882 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.364967 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.364967 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.364904 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365025 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365042 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365058 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365132 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365080 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365140 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365153 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365142 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365202 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365235 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365161 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365287 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365311 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365331 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365346 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.365386 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: 
I1124 11:59:14.365425 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.471278 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.473161 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.473223 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.473239 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.473283 4930 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.473941 4930 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.521902 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.529155 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.552847 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.558934 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.563341 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:14 crc kubenswrapper[4930]: W1124 11:59:14.582014 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-68505621299a9be91ea23ec1a1586de4ef1657446377547a7e0fa4edae70b673 WatchSource:0}: Error finding container 68505621299a9be91ea23ec1a1586de4ef1657446377547a7e0fa4edae70b673: Status 404 returned error can't find the container with id 68505621299a9be91ea23ec1a1586de4ef1657446377547a7e0fa4edae70b673 Nov 24 11:59:14 crc kubenswrapper[4930]: W1124 11:59:14.588294 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9e3733136cd0093e05ce3c9a47253cb53230360d7e6add87c010b580dfe97be1 WatchSource:0}: Error finding container 9e3733136cd0093e05ce3c9a47253cb53230360d7e6add87c010b580dfe97be1: Status 404 returned error can't find the container with id 9e3733136cd0093e05ce3c9a47253cb53230360d7e6add87c010b580dfe97be1 Nov 24 11:59:14 crc kubenswrapper[4930]: W1124 11:59:14.594243 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-e91a6ead9947335b1aea6fbb45fc4b192c518d604223681f705203a013100c70 WatchSource:0}: Error finding container e91a6ead9947335b1aea6fbb45fc4b192c518d604223681f705203a013100c70: Status 404 returned error can't find the container with id 
e91a6ead9947335b1aea6fbb45fc4b192c518d604223681f705203a013100c70 Nov 24 11:59:14 crc kubenswrapper[4930]: W1124 11:59:14.598772 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-91b5dfb9909192c7cdd35ebca1e6be2b9343b161d153c6d4f36c01a9a6203546 WatchSource:0}: Error finding container 91b5dfb9909192c7cdd35ebca1e6be2b9343b161d153c6d4f36c01a9a6203546: Status 404 returned error can't find the container with id 91b5dfb9909192c7cdd35ebca1e6be2b9343b161d153c6d4f36c01a9a6203546 Nov 24 11:59:14 crc kubenswrapper[4930]: W1124 11:59:14.601461 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9351c12bbe608490a578e1c113158e8f6c6f65881896ec2478f694b1a514d900 WatchSource:0}: Error finding container 9351c12bbe608490a578e1c113158e8f6c6f65881896ec2478f694b1a514d900: Status 404 returned error can't find the container with id 9351c12bbe608490a578e1c113158e8f6c6f65881896ec2478f694b1a514d900 Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.636936 4930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="800ms" Nov 24 11:59:14 crc kubenswrapper[4930]: W1124 11:59:14.829846 4930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.829979 4930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list 
*v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.874379 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.876027 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.876114 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.876137 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:14 crc kubenswrapper[4930]: I1124 11:59:14.876200 4930 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:59:14 crc kubenswrapper[4930]: E1124 11:59:14.876802 4930 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Nov 24 11:59:15 crc kubenswrapper[4930]: I1124 11:59:15.032180 4930 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Nov 24 11:59:15 crc kubenswrapper[4930]: I1124 11:59:15.034184 4930 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:15:33.681236926 +0000 UTC Nov 24 11:59:15 crc kubenswrapper[4930]: I1124 11:59:15.034264 4930 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 
532h16m18.646975815s for next certificate rotation Nov 24 11:59:15 crc kubenswrapper[4930]: I1124 11:59:15.088035 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9351c12bbe608490a578e1c113158e8f6c6f65881896ec2478f694b1a514d900"} Nov 24 11:59:15 crc kubenswrapper[4930]: I1124 11:59:15.089284 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"91b5dfb9909192c7cdd35ebca1e6be2b9343b161d153c6d4f36c01a9a6203546"} Nov 24 11:59:15 crc kubenswrapper[4930]: I1124 11:59:15.090104 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e91a6ead9947335b1aea6fbb45fc4b192c518d604223681f705203a013100c70"} Nov 24 11:59:15 crc kubenswrapper[4930]: I1124 11:59:15.091015 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9e3733136cd0093e05ce3c9a47253cb53230360d7e6add87c010b580dfe97be1"} Nov 24 11:59:15 crc kubenswrapper[4930]: I1124 11:59:15.091881 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"68505621299a9be91ea23ec1a1586de4ef1657446377547a7e0fa4edae70b673"} Nov 24 11:59:15 crc kubenswrapper[4930]: W1124 11:59:15.297484 4930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Nov 24 11:59:15 crc 
kubenswrapper[4930]: E1124 11:59:15.297870 4930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:59:15 crc kubenswrapper[4930]: E1124 11:59:15.438908 4930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="1.6s" Nov 24 11:59:15 crc kubenswrapper[4930]: W1124 11:59:15.513435 4930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Nov 24 11:59:15 crc kubenswrapper[4930]: E1124 11:59:15.513517 4930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:59:15 crc kubenswrapper[4930]: W1124 11:59:15.615067 4930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Nov 24 11:59:15 crc kubenswrapper[4930]: E1124 11:59:15.615226 4930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:59:15 crc kubenswrapper[4930]: I1124 11:59:15.677622 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:15 crc kubenswrapper[4930]: I1124 11:59:15.679262 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:15 crc kubenswrapper[4930]: I1124 11:59:15.679302 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:15 crc kubenswrapper[4930]: I1124 11:59:15.679311 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:15 crc kubenswrapper[4930]: I1124 11:59:15.679335 4930 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:59:15 crc kubenswrapper[4930]: E1124 11:59:15.679847 4930 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.032419 4930 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.096977 4930 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="16c11e5b4acee6e6cfc3f698bd3b402640f630c42b30f99d864b4fd3aa3b7143" exitCode=0 Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.097149 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:16 
crc kubenswrapper[4930]: I1124 11:59:16.097142 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"16c11e5b4acee6e6cfc3f698bd3b402640f630c42b30f99d864b4fd3aa3b7143"} Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.098260 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.098306 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.098322 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.099604 4930 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712" exitCode=0 Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.099676 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712"} Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.099786 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.101195 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.101260 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.101272 4930 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.102414 4930 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="65749fc086891c98be90e9567512181dbec456e46a0a1ee4757ea96a8baad5f2" exitCode=0 Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.102467 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"65749fc086891c98be90e9567512181dbec456e46a0a1ee4757ea96a8baad5f2"} Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.102516 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.103492 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.103728 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.103792 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.103820 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.105858 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.105909 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.105929 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:16 
crc kubenswrapper[4930]: I1124 11:59:16.106217 4930 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307" exitCode=0 Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.106302 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307"} Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.106341 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.107556 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.107584 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.107595 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.113526 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f"} Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.113602 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9"} Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.113619 4930 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24"} Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.113631 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0"} Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.113653 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.114797 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.114872 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.114897 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.513107 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:16 crc kubenswrapper[4930]: I1124 11:59:16.599198 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.033183 4930 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Nov 24 11:59:17 crc 
kubenswrapper[4930]: E1124 11:59:17.040362 4930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="3.2s" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.120647 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2"} Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.120717 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd"} Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.120732 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f"} Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.120746 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724"} Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.123329 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"edfe006f31272340fa98b4821ee0dce6d60014bbfc82c2d9d3eb94ba793804b9"} Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.123466 4930 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.125826 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.125856 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.125868 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.132805 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d736c52fd83cddaf5cb6a8640251e73f0b8fd7f66a5190ae29aecebd4b6c9114"} Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.132839 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6e7911dcd8010054bda50955cacdf50d03f6f33976ce46922e810d8a903ca3f8"} Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.132854 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ae67af2476e24057506bc0b9522c36edc01c9ed0cbdf28e8a507b3145ca8a6c8"} Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.132942 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.133889 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.133913 4930 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.133923 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.141812 4930 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="91528709e5abe54b6349f83c284553122ab3f2f227e5152485b13bd8d8dd6ffe" exitCode=0 Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.141947 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"91528709e5abe54b6349f83c284553122ab3f2f227e5152485b13bd8d8dd6ffe"} Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.142012 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.142007 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.143042 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.143061 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.143095 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.143107 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.143229 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:17 
crc kubenswrapper[4930]: I1124 11:59:17.143279 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.280824 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.281696 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.281726 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.281736 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:17 crc kubenswrapper[4930]: I1124 11:59:17.281757 4930 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:59:17 crc kubenswrapper[4930]: E1124 11:59:17.282154 4930 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Nov 24 11:59:17 crc kubenswrapper[4930]: W1124 11:59:17.409032 4930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Nov 24 11:59:17 crc kubenswrapper[4930]: E1124 11:59:17.409113 4930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:59:18 crc 
kubenswrapper[4930]: I1124 11:59:18.146557 4930 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7b3240462390bbc577a0b91d5906fc9612ea7107207334707cd8562a4ec8d1cc" exitCode=0 Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.146599 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7b3240462390bbc577a0b91d5906fc9612ea7107207334707cd8562a4ec8d1cc"} Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.146695 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.147465 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.147500 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.147511 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.150349 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282"} Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.150434 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.150482 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.150511 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.150675 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.150727 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.151279 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.151312 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.151324 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.151434 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.151461 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.151476 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.151731 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.151737 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.151768 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.151751 4930 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.151783 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:18 crc kubenswrapper[4930]: I1124 11:59:18.151791 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.130925 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.158297 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"30ff470343db2f876da2da31427d66dabeb7cec719bd6c094fb7ec92798997f0"} Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.158339 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6cabd3475a876131cfda4e0beb5e4dd858556f3924fe8892bfd6c302f0e6dbe1"} Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.158350 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9565ba89eec964fb99326c4b26b27f595e6e7626a765f5acfbede4115bc2fb9a"} Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.158362 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f0810530c4b7c47503e337c2f71d0fede8ea870f528d5ce87e2c05b653b87d07"} Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.158367 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:19 crc 
kubenswrapper[4930]: I1124 11:59:19.158370 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"913134b6ba565999ee3910f6bcb1149e278ffcfb1297a27d09a13cb0422fb734"} Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.158474 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.158486 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.158553 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.161004 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.161053 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.161071 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.161084 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.161122 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.161135 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.161172 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 
24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.161195 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.161211 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.600453 4930 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.600627 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.616839 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.617055 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.618446 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.618477 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.618489 4930 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 11:59:19 crc kubenswrapper[4930]: I1124 11:59:19.626815 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.110927 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.159987 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.160094 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.160148 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.160108 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.161262 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.161309 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.161324 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.161386 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.161412 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:20 crc 
kubenswrapper[4930]: I1124 11:59:20.161426 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.161778 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.161826 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.161844 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.482337 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.483709 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.483755 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.483771 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:20 crc kubenswrapper[4930]: I1124 11:59:20.483794 4930 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:59:21 crc kubenswrapper[4930]: I1124 11:59:21.162644 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:21 crc kubenswrapper[4930]: I1124 11:59:21.162835 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:21 crc kubenswrapper[4930]: I1124 11:59:21.163643 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 
11:59:21 crc kubenswrapper[4930]: I1124 11:59:21.163688 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:21 crc kubenswrapper[4930]: I1124 11:59:21.163700 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:21 crc kubenswrapper[4930]: I1124 11:59:21.164206 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:21 crc kubenswrapper[4930]: I1124 11:59:21.164275 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:21 crc kubenswrapper[4930]: I1124 11:59:21.164297 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:22 crc kubenswrapper[4930]: I1124 11:59:22.277916 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 24 11:59:22 crc kubenswrapper[4930]: I1124 11:59:22.278146 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:22 crc kubenswrapper[4930]: I1124 11:59:22.279376 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:22 crc kubenswrapper[4930]: I1124 11:59:22.279427 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:22 crc kubenswrapper[4930]: I1124 11:59:22.279442 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:22 crc kubenswrapper[4930]: I1124 11:59:22.814437 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:22 crc kubenswrapper[4930]: I1124 11:59:22.814742 4930 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Nov 24 11:59:22 crc kubenswrapper[4930]: I1124 11:59:22.815998 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:22 crc kubenswrapper[4930]: I1124 11:59:22.816054 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:22 crc kubenswrapper[4930]: I1124 11:59:22.816069 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:24 crc kubenswrapper[4930]: E1124 11:59:24.178136 4930 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 11:59:25 crc kubenswrapper[4930]: I1124 11:59:25.901416 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:25 crc kubenswrapper[4930]: I1124 11:59:25.901708 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:25 crc kubenswrapper[4930]: I1124 11:59:25.903195 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:25 crc kubenswrapper[4930]: I1124 11:59:25.903234 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:25 crc kubenswrapper[4930]: I1124 11:59:25.903313 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:27 crc kubenswrapper[4930]: I1124 11:59:27.402501 4930 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 11:59:27 crc kubenswrapper[4930]: I1124 11:59:27.402954 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 11:59:27 crc kubenswrapper[4930]: I1124 11:59:27.823182 4930 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]log ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]etcd ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/generic-apiserver-start-informers ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/priority-and-fairness-filter ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/start-apiextensions-informers ok Nov 24 11:59:27 crc 
kubenswrapper[4930]: [+]poststarthook/start-apiextensions-controllers ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/crd-informer-synced ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/start-system-namespaces-controller ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 24 11:59:27 crc kubenswrapper[4930]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 24 11:59:27 crc kubenswrapper[4930]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/bootstrap-controller ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/start-kube-aggregator-informers ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/apiservice-registration-controller ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/apiservice-discovery-controller ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 24 11:59:27 crc 
kubenswrapper[4930]: [+]autoregister-completion ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/apiservice-openapi-controller ok Nov 24 11:59:27 crc kubenswrapper[4930]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 24 11:59:27 crc kubenswrapper[4930]: livez check failed Nov 24 11:59:27 crc kubenswrapper[4930]: I1124 11:59:27.823283 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:59:29 crc kubenswrapper[4930]: I1124 11:59:29.600267 4930 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 11:59:29 crc kubenswrapper[4930]: I1124 11:59:29.600360 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 11:59:30 crc kubenswrapper[4930]: I1124 11:59:30.134638 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 11:59:30 crc kubenswrapper[4930]: I1124 11:59:30.134877 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:30 crc kubenswrapper[4930]: I1124 11:59:30.136506 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:30 crc 
kubenswrapper[4930]: I1124 11:59:30.136579 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:30 crc kubenswrapper[4930]: I1124 11:59:30.136598 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:30 crc kubenswrapper[4930]: I1124 11:59:30.147522 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 11:59:30 crc kubenswrapper[4930]: I1124 11:59:30.188742 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:30 crc kubenswrapper[4930]: I1124 11:59:30.189639 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:30 crc kubenswrapper[4930]: I1124 11:59:30.189677 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:30 crc kubenswrapper[4930]: I1124 11:59:30.189686 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:32 crc kubenswrapper[4930]: E1124 11:59:32.402258 4930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.403515 4930 trace.go:236] Trace[216585863]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:59:21.487) (total time: 10916ms): Nov 24 11:59:32 crc kubenswrapper[4930]: Trace[216585863]: ---"Objects listed" error: 10916ms (11:59:32.403) Nov 24 11:59:32 crc kubenswrapper[4930]: Trace[216585863]: [10.916039531s] [10.916039531s] END Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.403578 4930 reflector.go:368] Caches populated 
for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.404677 4930 trace.go:236] Trace[1883208320]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:59:18.802) (total time: 13602ms): Nov 24 11:59:32 crc kubenswrapper[4930]: Trace[1883208320]: ---"Objects listed" error: 13602ms (11:59:32.404) Nov 24 11:59:32 crc kubenswrapper[4930]: Trace[1883208320]: [13.6021182s] [13.6021182s] END Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.404711 4930 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.405629 4930 trace.go:236] Trace[883644804]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:59:18.462) (total time: 13943ms): Nov 24 11:59:32 crc kubenswrapper[4930]: Trace[883644804]: ---"Objects listed" error: 13943ms (11:59:32.405) Nov 24 11:59:32 crc kubenswrapper[4930]: Trace[883644804]: [13.943330497s] [13.943330497s] END Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.405658 4930 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.405684 4930 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 24 11:59:32 crc kubenswrapper[4930]: E1124 11:59:32.406986 4930 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.407117 4930 trace.go:236] Trace[182631739]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:59:17.916) (total time: 14490ms): Nov 24 11:59:32 crc kubenswrapper[4930]: Trace[182631739]: ---"Objects listed" error: 14490ms (11:59:32.406) Nov 24 11:59:32 crc 
kubenswrapper[4930]: Trace[182631739]: [14.49071096s] [14.49071096s] END Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.407175 4930 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.436856 4930 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58444->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.436909 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58444->192.168.126.11:17697: read: connection reset by peer" Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.818008 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.818761 4930 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 24 11:59:32 crc kubenswrapper[4930]: I1124 11:59:32.818801 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 24 11:59:32 
crc kubenswrapper[4930]: I1124 11:59:32.822201 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.023427 4930 apiserver.go:52] "Watching apiserver" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.025913 4930 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.026172 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.026547 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.026597 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.026653 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.026674 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.026794 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.026926 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.026927 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.027114 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.027181 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.029785 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.029972 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.030018 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.030072 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.030685 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.031679 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.032046 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.032142 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.036432 4930 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.040668 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 24 11:59:33 crc kubenswrapper[4930]: 
I1124 11:59:33.083684 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.098590 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.108764 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.108825 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.108850 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.108875 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.108897 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.108921 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.108942 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.108964 4930 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.108987 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109019 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109039 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109057 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109075 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109108 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109129 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109147 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109170 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109194 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109215 4930 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109239 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109258 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109276 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109295 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109315 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109337 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109365 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109360 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109356 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109389 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109417 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109447 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109504 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109529 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109569 4930 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109592 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109616 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109639 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109661 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109688 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109714 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109734 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109752 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109804 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109854 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109876 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109898 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109919 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109939 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109958 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109980 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110000 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110026 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109378 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110067 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110141 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110171 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110184 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110243 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110344 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110413 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110428 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109374 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109633 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109642 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.109856 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110008 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110609 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110641 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110681 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110048 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110763 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110790 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110825 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110843 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110878 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110903 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110925 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110948 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110968 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110992 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111012 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111067 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111086 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111106 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 11:59:33 crc 
kubenswrapper[4930]: I1124 11:59:33.111126 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111146 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111167 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111187 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111204 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111228 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111247 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111269 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111288 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111308 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111327 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 
11:59:33.111343 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111363 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111382 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111398 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111416 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111434 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111453 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111470 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111490 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111509 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111531 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111562 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111583 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111607 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111629 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111649 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111668 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111685 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111703 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111720 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111744 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111770 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111789 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111808 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111825 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111844 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111865 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111885 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111904 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111924 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111945 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111967 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111985 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112003 4930 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112022 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112039 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112056 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112075 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112092 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" 
(UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112111 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112130 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112148 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112168 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112187 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112205 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112222 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112243 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112260 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112276 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112294 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:59:33 
crc kubenswrapper[4930]: I1124 11:59:33.112315 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112333 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112358 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112379 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110837 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112397 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110942 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.110963 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112421 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112447 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112470 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112493 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112515 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112546 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112565 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112582 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112602 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112620 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112674 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 
11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112695 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112715 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112734 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112751 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112768 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112786 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112803 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112819 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112842 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112859 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112875 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112892 4930 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112912 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112932 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112950 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112966 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112986 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113572 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113600 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113622 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113647 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113756 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 
11:59:33.113779 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113799 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113819 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113839 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113857 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113875 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113894 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113915 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113936 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113956 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113974 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.113994 4930 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.114015 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.114037 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.114055 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.114076 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.114098 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.114120 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.114140 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115696 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115720 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115741 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115760 4930 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115783 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115804 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115826 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115846 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115865 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115883 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115904 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115923 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115944 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116044 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116071 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116100 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116125 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116156 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116180 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116203 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116250 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116273 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116294 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116321 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116343 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116368 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116387 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116465 4930 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116479 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath 
\"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116491 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116501 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116513 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116523 4930 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116547 4930 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116558 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116569 4930 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116579 4930 
reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116590 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116605 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116617 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116628 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116639 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116650 4930 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116661 4930 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116671 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116682 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116693 4930 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116705 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116716 4930 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116727 4930 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111070 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111157 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111203 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111206 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111248 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111334 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111347 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111426 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111425 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111437 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111438 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111475 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111573 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111579 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111660 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111703 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111723 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111815 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111922 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.111944 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112074 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112072 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112093 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112105 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112120 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112144 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112197 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112272 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116980 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112331 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112379 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112377 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112360 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.112468 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.114094 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.114786 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.114982 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115268 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115200 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115291 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115312 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115320 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115402 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115512 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115633 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115756 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115781 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.115988 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116018 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116097 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116313 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116579 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116602 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.116967 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.117017 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.117074 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.117148 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.117836 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.118092 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.118353 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.118379 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.118515 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.118774 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.118929 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.118865 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.119239 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.119734 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.119827 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.119836 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.120039 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.120059 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.120184 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.120288 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.120611 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.120680 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.120690 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.120933 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.120973 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.121128 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.121261 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.121340 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.121947 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.121990 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.122069 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.122087 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.122270 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.122389 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.122482 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.122881 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.122968 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.123362 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.123365 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.123462 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.123627 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.123665 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.123847 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.123959 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.124021 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.124290 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.124450 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.124472 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.124632 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.124734 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.124778 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.124965 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.125061 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.125209 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.125516 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.125698 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.126129 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.126138 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.126331 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.126347 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.126473 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.126491 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.126970 4930 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.126654 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.127853 4930 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.128107 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.128620 4930 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.128814 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.128890 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:33.627921001 +0000 UTC m=+20.242248951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.128941 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:33.628918547 +0000 UTC m=+20.243246497 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.128981 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:59:33.628969139 +0000 UTC m=+20.243297089 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.129420 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.130006 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.130313 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.131353 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.131445 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.132513 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.140532 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.140992 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.141024 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.141040 4930 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.141347 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.141372 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.141387 4930 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.141777 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.141858 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.141886 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.142145 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:33.642099786 +0000 UTC m=+20.256427956 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.142166 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.142205 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-11-24 11:59:33.642181388 +0000 UTC m=+20.256509328 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.142596 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.142618 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.143064 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.143167 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.143189 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.143498 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.143618 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.143724 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.143858 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.143873 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.144016 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.144623 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.144646 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.144788 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.144836 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.145106 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.145192 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.145359 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.145524 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.146303 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.146386 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.147301 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.149682 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.149861 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.150003 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.150267 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.151009 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.151042 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.151230 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.151711 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.151850 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.151999 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.152563 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.152585 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.152661 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.152692 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.152947 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.153028 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.153115 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.153351 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.153508 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.153561 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.153609 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.153815 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.154312 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.154738 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.155261 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.157412 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.166075 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.175415 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.185612 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.185897 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.186152 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.190734 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.195434 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.196688 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.198226 4930 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282" exitCode=255 Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.198269 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282"} Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.202132 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.203980 4930 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.204314 4930 scope.go:117] "RemoveContainer" containerID="cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.213718 4930 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217565 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217709 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217852 4930 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217870 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217884 4930 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217898 4930 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217911 
4930 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217922 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217933 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217944 4930 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217957 4930 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217969 4930 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217981 4930 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217981 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.217994 4930 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218021 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218037 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218051 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218149 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218216 4930 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on 
node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218231 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218247 4930 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218332 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218344 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218378 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218391 4930 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218402 4930 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218413 4930 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218473 4930 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218485 4930 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218496 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218507 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218519 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218662 4930 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218675 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218687 4930 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218743 4930 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218891 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218910 4930 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218922 4930 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218933 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218943 4930 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" 
Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218955 4930 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218982 4930 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.218996 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219006 4930 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219018 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219030 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219065 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219077 4930 
reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219087 4930 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219098 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219109 4930 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219120 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219130 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219140 4930 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219151 4930 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219162 4930 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219172 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219182 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219193 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219204 4930 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219215 4930 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219226 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219238 4930 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219250 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219260 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219271 4930 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219282 4930 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219294 4930 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219305 4930 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath 
\"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219318 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219329 4930 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219340 4930 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219712 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219726 4930 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219735 4930 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219743 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219752 4930 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219762 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219770 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219803 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219816 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219830 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219848 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219861 4930 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219873 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219891 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219908 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219921 4930 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219932 4930 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219944 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219955 4930 reconciler_common.go:293] "Volume detached for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219966 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219982 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.219993 4930 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220003 4930 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220012 4930 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220024 4930 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220033 4930 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") 
on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220041 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220050 4930 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220059 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220069 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220077 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220086 4930 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220095 4930 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220108 4930 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220117 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220125 4930 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220133 4930 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220140 4930 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220149 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220157 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220165 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220174 4930 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220184 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220194 4930 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220205 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220221 4930 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220237 4930 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220249 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc 
kubenswrapper[4930]: I1124 11:59:33.220261 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220271 4930 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220280 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220291 4930 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220301 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220313 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220323 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.220337 4930 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.222440 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.222461 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.222476 4930 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.222497 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.222508 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.222523 4930 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.222571 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.222617 4930 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.222919 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.222972 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.222983 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.222999 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.223029 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.223039 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225681 4930 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225719 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225744 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225766 4930 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225781 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225796 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225811 4930 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 
24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225829 4930 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225842 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225854 4930 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225866 4930 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225883 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225895 4930 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225909 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225924 4930 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225945 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225960 4930 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225972 4930 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.225989 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.226002 4930 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.226014 4930 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.226027 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.226044 4930 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.226057 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.226069 4930 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.226080 4930 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.226096 4930 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.226108 4930 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.226120 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.226131 
4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.226147 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.228041 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.239782 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.251046 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.262413 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.277881 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.289395 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.300816 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.340446 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.349443 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:59:33 crc kubenswrapper[4930]: W1124 11:59:33.355105 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-aa6d5e30c1393eb3722994b7a6dc30e6938975340062c87523c2d455265f17b6 WatchSource:0}: Error finding container aa6d5e30c1393eb3722994b7a6dc30e6938975340062c87523c2d455265f17b6: Status 404 returned error can't find the container with id aa6d5e30c1393eb3722994b7a6dc30e6938975340062c87523c2d455265f17b6 Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.357810 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:59:33 crc kubenswrapper[4930]: W1124 11:59:33.375193 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-dbc88da006f561bb4c8ac36077e2f35894ba075b8823e6e81c391645641a3de6 WatchSource:0}: Error finding container dbc88da006f561bb4c8ac36077e2f35894ba075b8823e6e81c391645641a3de6: Status 404 returned error can't find the container with id dbc88da006f561bb4c8ac36077e2f35894ba075b8823e6e81c391645641a3de6 Nov 24 11:59:33 crc kubenswrapper[4930]: W1124 11:59:33.376078 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-762ece3af349044d1da18522c424c1240d6da853e5fabb3a340553c8e91118ad WatchSource:0}: Error finding container 762ece3af349044d1da18522c424c1240d6da853e5fabb3a340553c8e91118ad: Status 404 returned error can't find the container with id 762ece3af349044d1da18522c424c1240d6da853e5fabb3a340553c8e91118ad Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.630878 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.630999 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 
11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.631041 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.631215 4930 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.631307 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:34.631279979 +0000 UTC m=+21.245607949 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.631425 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:59:34.631410372 +0000 UTC m=+21.245738332 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.631593 4930 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.631716 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:34.631677049 +0000 UTC m=+21.246004999 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.732135 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.732199 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.732325 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.732344 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.732356 4930 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.732409 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:34.732391624 +0000 UTC m=+21.346719574 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.732472 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.732483 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.732492 4930 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:33 crc kubenswrapper[4930]: E1124 11:59:33.732517 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-11-24 11:59:34.732508597 +0000 UTC m=+21.346836547 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.757057 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-gfn4n"] Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.757453 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gfn4n" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.759124 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.759606 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.761287 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.769196 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.780608 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.790941 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.812964 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.827477 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.843016 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.865359 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.889930 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.933538 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b976b8fb-925e-4ceb-bba5-de69b9bbb46b-hosts-file\") pod \"node-resolver-gfn4n\" (UID: \"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\") " pod="openshift-dns/node-resolver-gfn4n" Nov 24 11:59:33 crc kubenswrapper[4930]: I1124 11:59:33.933654 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45d6r\" (UniqueName: \"kubernetes.io/projected/b976b8fb-925e-4ceb-bba5-de69b9bbb46b-kube-api-access-45d6r\") pod \"node-resolver-gfn4n\" (UID: \"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\") " pod="openshift-dns/node-resolver-gfn4n" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.034123 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/b976b8fb-925e-4ceb-bba5-de69b9bbb46b-hosts-file\") pod \"node-resolver-gfn4n\" (UID: \"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\") " pod="openshift-dns/node-resolver-gfn4n" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.034186 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45d6r\" (UniqueName: \"kubernetes.io/projected/b976b8fb-925e-4ceb-bba5-de69b9bbb46b-kube-api-access-45d6r\") pod \"node-resolver-gfn4n\" (UID: \"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\") " pod="openshift-dns/node-resolver-gfn4n" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.034253 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b976b8fb-925e-4ceb-bba5-de69b9bbb46b-hosts-file\") pod \"node-resolver-gfn4n\" (UID: \"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\") " pod="openshift-dns/node-resolver-gfn4n" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.052888 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45d6r\" (UniqueName: \"kubernetes.io/projected/b976b8fb-925e-4ceb-bba5-de69b9bbb46b-kube-api-access-45d6r\") pod \"node-resolver-gfn4n\" (UID: \"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\") " pod="openshift-dns/node-resolver-gfn4n" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.069095 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-gfn4n" Nov 24 11:59:34 crc kubenswrapper[4930]: W1124 11:59:34.080945 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb976b8fb_925e_4ceb_bba5_de69b9bbb46b.slice/crio-384e67aaa6383dd9f3058818029daa13a53fba8c42060c75edee22e2c483d9e2 WatchSource:0}: Error finding container 384e67aaa6383dd9f3058818029daa13a53fba8c42060c75edee22e2c483d9e2: Status 404 returned error can't find the container with id 384e67aaa6383dd9f3058818029daa13a53fba8c42060c75edee22e2c483d9e2 Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.087089 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.087878 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.089151 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.089784 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.090814 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.091350 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.091986 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.092974 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.093600 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.094513 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.095455 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.096377 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.096957 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.097467 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.098494 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.099091 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.099568 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.100507 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.101466 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.102072 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.103121 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.103595 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.105149 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.106251 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.107743 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.112377 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.112573 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.113207 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.115222 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.115767 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.116997 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.117586 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.118086 4930 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.118197 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.120452 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.120963 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.121299 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.121968 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.123627 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.124301 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.125218 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.125918 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.127134 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.127799 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.128879 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.129582 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.130782 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.131253 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.132762 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.133098 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.133294 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.134824 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.135315 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.136221 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.136716 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.137285 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.138411 4930 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.139063 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.147742 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.166898 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.187798 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.203601 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gfn4n" event={"ID":"b976b8fb-925e-4ceb-bba5-de69b9bbb46b","Type":"ContainerStarted","Data":"384e67aaa6383dd9f3058818029daa13a53fba8c42060c75edee22e2c483d9e2"} Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.206064 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.206354 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.212074 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2"} Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.213032 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.217330 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564"} Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.217379 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b"} Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.217392 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"762ece3af349044d1da18522c424c1240d6da853e5fabb3a340553c8e91118ad"} Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.220121 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"dbc88da006f561bb4c8ac36077e2f35894ba075b8823e6e81c391645641a3de6"} Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.223897 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8"} Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.223952 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"aa6d5e30c1393eb3722994b7a6dc30e6938975340062c87523c2d455265f17b6"} Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.227103 4930 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.254636 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.275258 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-kjhcw"] Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.275612 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.278654 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.278922 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.279243 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.280586 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5lvxv"] Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.280825 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.282229 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b6q2v"] Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.282805 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.290880 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.295690 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.295944 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.296136 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.296201 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.296292 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.296333 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.296419 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.296502 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-c8rb7"] Nov 24 
11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.296532 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.296666 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.296686 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.296792 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.296373 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.296979 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.297124 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.301704 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.310087 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.310394 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.333825 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.352695 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.375507 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.388039 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.402879 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 
11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.415641 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.428333 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.437483 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmtq9\" (UniqueName: \"kubernetes.io/projected/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-kube-api-access-gmtq9\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.437701 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-var-lib-cni-multus\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.437785 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9gj6\" (UniqueName: \"kubernetes.io/projected/b3159aca-5e15-4f2c-ae74-e547f4a227f7-kube-api-access-t9gj6\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.437881 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-system-cni-dir\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.437988 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m687j\" (UniqueName: \"kubernetes.io/projected/68c34ffc-f1cd-4828-b83c-22bd0c02f364-kube-api-access-m687j\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.438098 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-var-lib-cni-bin\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.438272 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-run-multus-certs\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.438397 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.438493 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovn-node-metrics-cert\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.438622 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-var-lib-openvswitch\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.438743 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-var-lib-kubelet\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc 
kubenswrapper[4930]: I1124 11:59:34.438794 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.438934 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8835064f-65c7-48cb-8b7d-330e5cce840a-rootfs\") pod \"machine-config-daemon-kjhcw\" (UID: \"8835064f-65c7-48cb-8b7d-330e5cce840a\") " pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.439036 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/8835064f-65c7-48cb-8b7d-330e5cce840a-proxy-tls\") pod \"machine-config-daemon-kjhcw\" (UID: \"8835064f-65c7-48cb-8b7d-330e5cce840a\") " pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.439137 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-slash\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.439223 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-run-k8s-cni-cncf-io\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.439303 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-run-netns\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.439379 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8835064f-65c7-48cb-8b7d-330e5cce840a-mcd-auth-proxy-config\") pod \"machine-config-daemon-kjhcw\" (UID: \"8835064f-65c7-48cb-8b7d-330e5cce840a\") " pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.439461 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovnkube-script-lib\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.439563 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-cnibin\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.439648 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-hostroot\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.439749 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-ovn\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.439815 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-multus-conf-dir\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.439895 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-etc-kubernetes\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.439980 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-run-netns\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.440045 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-etc-openvswitch\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.440161 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-os-release\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.440266 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.440363 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-multus-cni-dir\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.440479 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-multus-socket-dir-parent\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.440595 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/68c34ffc-f1cd-4828-b83c-22bd0c02f364-multus-daemon-config\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.440677 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-kubelet\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.440747 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-env-overrides\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.440821 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/68c34ffc-f1cd-4828-b83c-22bd0c02f364-cni-binary-copy\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.440892 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-cnibin\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.440966 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.441047 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-openvswitch\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.441114 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-run-ovn-kubernetes\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc 
kubenswrapper[4930]: I1124 11:59:34.441184 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-cni-bin\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.441251 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovnkube-config\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.441326 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7qgn\" (UniqueName: \"kubernetes.io/projected/8835064f-65c7-48cb-8b7d-330e5cce840a-kube-api-access-n7qgn\") pod \"machine-config-daemon-kjhcw\" (UID: \"8835064f-65c7-48cb-8b7d-330e5cce840a\") " pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.441393 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-systemd-units\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.441486 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-systemd\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.441608 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-system-cni-dir\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.441716 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-node-log\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.441826 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-cni-binary-copy\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.441934 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-os-release\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.442032 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-log-socket\") pod \"ovnkube-node-b6q2v\" (UID: 
\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.442131 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-cni-netd\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.451582 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.465479 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.476852 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.488899 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.508786 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.524662 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.542969 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543143 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmtq9\" (UniqueName: \"kubernetes.io/projected/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-kube-api-access-gmtq9\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543494 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-var-lib-cni-multus\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543516 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9gj6\" (UniqueName: \"kubernetes.io/projected/b3159aca-5e15-4f2c-ae74-e547f4a227f7-kube-api-access-t9gj6\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543571 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" 
(UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-system-cni-dir\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543587 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m687j\" (UniqueName: \"kubernetes.io/projected/68c34ffc-f1cd-4828-b83c-22bd0c02f364-kube-api-access-m687j\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543604 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543620 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovn-node-metrics-cert\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543639 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-var-lib-cni-bin\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543655 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-run-multus-certs\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543671 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-var-lib-openvswitch\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543685 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-var-lib-kubelet\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543701 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8835064f-65c7-48cb-8b7d-330e5cce840a-rootfs\") pod \"machine-config-daemon-kjhcw\" (UID: \"8835064f-65c7-48cb-8b7d-330e5cce840a\") " pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543717 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8835064f-65c7-48cb-8b7d-330e5cce840a-proxy-tls\") pod \"machine-config-daemon-kjhcw\" (UID: \"8835064f-65c7-48cb-8b7d-330e5cce840a\") " pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543731 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-slash\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543746 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-run-k8s-cni-cncf-io\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543763 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8835064f-65c7-48cb-8b7d-330e5cce840a-mcd-auth-proxy-config\") pod \"machine-config-daemon-kjhcw\" (UID: \"8835064f-65c7-48cb-8b7d-330e5cce840a\") " pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543779 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovnkube-script-lib\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543794 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-run-netns\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543875 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-slash\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543903 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-run-multus-certs\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543940 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-var-lib-kubelet\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543963 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8835064f-65c7-48cb-8b7d-330e5cce840a-rootfs\") pod \"machine-config-daemon-kjhcw\" (UID: \"8835064f-65c7-48cb-8b7d-330e5cce840a\") " pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544027 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-system-cni-dir\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544080 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-run-k8s-cni-cncf-io\") pod \"multus-5lvxv\" (UID: 
\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544106 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-var-lib-cni-bin\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544138 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544065 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-run-netns\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543918 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-var-lib-openvswitch\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544335 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-ovn\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc 
kubenswrapper[4930]: I1124 11:59:34.544515 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-host-var-lib-cni-multus\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.543822 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-ovn\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544685 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-cnibin\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544700 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-hostroot\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544714 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-multus-conf-dir\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544729 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-etc-kubernetes\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544749 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-run-netns\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544753 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-hostroot\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544766 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-etc-openvswitch\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544755 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-cnibin\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544784 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-os-release\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " 
pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544789 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-etc-kubernetes\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544800 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544816 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-multus-cni-dir\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544823 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-etc-openvswitch\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544833 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-run-netns\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: 
I1124 11:59:34.544836 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-multus-socket-dir-parent\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544876 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-multus-socket-dir-parent\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544883 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/68c34ffc-f1cd-4828-b83c-22bd0c02f364-multus-daemon-config\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544911 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-kubelet\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544930 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-env-overrides\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544948 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/68c34ffc-f1cd-4828-b83c-22bd0c02f364-cni-binary-copy\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544943 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8835064f-65c7-48cb-8b7d-330e5cce840a-mcd-auth-proxy-config\") pod \"machine-config-daemon-kjhcw\" (UID: \"8835064f-65c7-48cb-8b7d-330e5cce840a\") " pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544991 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-openvswitch\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544995 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-multus-cni-dir\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544801 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-multus-conf-dir\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544970 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-openvswitch\") 
pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.544952 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovnkube-script-lib\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.545036 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-run-ovn-kubernetes\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.545066 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-run-ovn-kubernetes\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.545090 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-cni-bin\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.545647 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: 
\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.545664 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/68c34ffc-f1cd-4828-b83c-22bd0c02f364-multus-daemon-config\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.545693 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-cni-bin\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.545706 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/68c34ffc-f1cd-4828-b83c-22bd0c02f364-cni-binary-copy\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.545723 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovnkube-config\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546148 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-cnibin\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc 
kubenswrapper[4930]: I1124 11:59:34.546176 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546201 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7qgn\" (UniqueName: \"kubernetes.io/projected/8835064f-65c7-48cb-8b7d-330e5cce840a-kube-api-access-n7qgn\") pod \"machine-config-daemon-kjhcw\" (UID: \"8835064f-65c7-48cb-8b7d-330e5cce840a\") " pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546226 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-systemd-units\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546251 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-systemd\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546261 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovnkube-config\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 
11:59:34.546271 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-system-cni-dir\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546263 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-cnibin\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546297 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-node-log\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546322 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-cni-binary-copy\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546343 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-os-release\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546379 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-log-socket\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546400 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-cni-netd\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546484 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-cni-netd\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546017 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-env-overrides\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.545730 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-kubelet\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546526 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-systemd-units\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546550 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-node-log\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546610 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-systemd\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546648 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-system-cni-dir\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.545793 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-os-release\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546714 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/68c34ffc-f1cd-4828-b83c-22bd0c02f364-os-release\") pod \"multus-5lvxv\" 
(UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546745 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-log-socket\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.546934 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.547042 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-cni-binary-copy\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.549629 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovn-node-metrics-cert\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.549646 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8835064f-65c7-48cb-8b7d-330e5cce840a-proxy-tls\") pod \"machine-config-daemon-kjhcw\" (UID: \"8835064f-65c7-48cb-8b7d-330e5cce840a\") " 
pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.560381 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.561059 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7qgn\" (UniqueName: \"kubernetes.io/projected/8835064f-65c7-48cb-8b7d-330e5cce840a-kube-api-access-n7qgn\") pod \"machine-config-daemon-kjhcw\" (UID: \"8835064f-65c7-48cb-8b7d-330e5cce840a\") " pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.562447 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m687j\" (UniqueName: \"kubernetes.io/projected/68c34ffc-f1cd-4828-b83c-22bd0c02f364-kube-api-access-m687j\") pod \"multus-5lvxv\" (UID: \"68c34ffc-f1cd-4828-b83c-22bd0c02f364\") " pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.565016 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmtq9\" (UniqueName: \"kubernetes.io/projected/aee5f87e-22f1-4e8c-8f14-3d792f4d9a08-kube-api-access-gmtq9\") pod \"multus-additional-cni-plugins-c8rb7\" (UID: \"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\") " pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.565630 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t9gj6\" (UniqueName: \"kubernetes.io/projected/b3159aca-5e15-4f2c-ae74-e547f4a227f7-kube-api-access-t9gj6\") pod \"ovnkube-node-b6q2v\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.575648 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:34Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.589998 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.602301 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-5lvxv" Nov 24 11:59:34 crc kubenswrapper[4930]: W1124 11:59:34.608734 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8835064f_65c7_48cb_8b7d_330e5cce840a.slice/crio-2e64b2ca51df129e9b3b06721bda81f4e3032d17b4a2731eaf4edaec09780f50 WatchSource:0}: Error finding container 2e64b2ca51df129e9b3b06721bda81f4e3032d17b4a2731eaf4edaec09780f50: Status 404 returned error can't find the container with id 2e64b2ca51df129e9b3b06721bda81f4e3032d17b4a2731eaf4edaec09780f50 Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.608859 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.616878 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" Nov 24 11:59:34 crc kubenswrapper[4930]: W1124 11:59:34.644473 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaee5f87e_22f1_4e8c_8f14_3d792f4d9a08.slice/crio-002222e049b7cfa09df958fef462d2b084d41c71d3d0773a4c4276b383708d46 WatchSource:0}: Error finding container 002222e049b7cfa09df958fef462d2b084d41c71d3d0773a4c4276b383708d46: Status 404 returned error can't find the container with id 002222e049b7cfa09df958fef462d2b084d41c71d3d0773a4c4276b383708d46 Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.647669 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:59:34 crc kubenswrapper[4930]: E1124 11:59:34.647830 4930 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:59:36.647802325 +0000 UTC m=+23.262130325 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.647945 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.647996 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:34 crc kubenswrapper[4930]: E1124 11:59:34.648087 4930 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:59:34 crc kubenswrapper[4930]: E1124 11:59:34.648136 4930 secret.go:188] Couldn't get 
secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:59:34 crc kubenswrapper[4930]: E1124 11:59:34.648157 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:36.648143604 +0000 UTC m=+23.262471554 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:59:34 crc kubenswrapper[4930]: E1124 11:59:34.648202 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:36.648172405 +0000 UTC m=+23.262500415 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.749269 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:34 crc kubenswrapper[4930]: E1124 11:59:34.749440 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:59:34 crc kubenswrapper[4930]: E1124 11:59:34.749575 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:59:34 crc kubenswrapper[4930]: E1124 11:59:34.749610 4930 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:34 crc kubenswrapper[4930]: E1124 11:59:34.749679 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-11-24 11:59:36.74966056 +0000 UTC m=+23.363988510 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:34 crc kubenswrapper[4930]: I1124 11:59:34.749723 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:34 crc kubenswrapper[4930]: E1124 11:59:34.749824 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:59:34 crc kubenswrapper[4930]: E1124 11:59:34.749841 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:59:34 crc kubenswrapper[4930]: E1124 11:59:34.749851 4930 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:34 crc kubenswrapper[4930]: E1124 11:59:34.749889 4930 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:36.749879166 +0000 UTC m=+23.364207116 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.084031 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.084097 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.084134 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:35 crc kubenswrapper[4930]: E1124 11:59:35.084186 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:59:35 crc kubenswrapper[4930]: E1124 11:59:35.084346 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:59:35 crc kubenswrapper[4930]: E1124 11:59:35.084478 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.228397 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gfn4n" event={"ID":"b976b8fb-925e-4ceb-bba5-de69b9bbb46b","Type":"ContainerStarted","Data":"7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d"} Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.230517 4930 generic.go:334] "Generic (PLEG): container finished" podID="aee5f87e-22f1-4e8c-8f14-3d792f4d9a08" containerID="35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582" exitCode=0 Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.230606 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" event={"ID":"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08","Type":"ContainerDied","Data":"35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582"} Nov 24 11:59:35 crc 
kubenswrapper[4930]: I1124 11:59:35.230667 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" event={"ID":"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08","Type":"ContainerStarted","Data":"002222e049b7cfa09df958fef462d2b084d41c71d3d0773a4c4276b383708d46"} Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.232238 4930 generic.go:334] "Generic (PLEG): container finished" podID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerID="5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6" exitCode=0 Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.232302 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6"} Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.232320 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerStarted","Data":"c3d8b9ead05ab679034fea6e6d838be5bf35c0ce97cca7fd53ed732a57d93b4e"} Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.242626 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5lvxv" event={"ID":"68c34ffc-f1cd-4828-b83c-22bd0c02f364","Type":"ContainerStarted","Data":"d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336"} Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.242682 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5lvxv" event={"ID":"68c34ffc-f1cd-4828-b83c-22bd0c02f364","Type":"ContainerStarted","Data":"276ca6a7092e7e0af9fa9cf2a57b98f5177875910878aa05f60588004c70ccc6"} Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.245464 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" 
event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b"} Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.245516 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103"} Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.245544 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"2e64b2ca51df129e9b3b06721bda81f4e3032d17b4a2731eaf4edaec09780f50"} Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.274272 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.288791 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.312766 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.336795 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.362408 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.390479 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.417446 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.435327 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.464341 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.478138 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.493679 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.509962 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.526899 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.542437 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.557646 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.576022 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.588524 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e264
71353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.610888 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.628122 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.644425 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.655506 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.668448 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.681367 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:35 crc kubenswrapper[4930]: I1124 11:59:35.695294 4930 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.252189 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerStarted","Data":"38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a"} Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.252578 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerStarted","Data":"a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85"} Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.252596 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" 
event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerStarted","Data":"9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6"} Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.252606 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerStarted","Data":"0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda"} Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.252616 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerStarted","Data":"fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc"} Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.254224 4930 generic.go:334] "Generic (PLEG): container finished" podID="aee5f87e-22f1-4e8c-8f14-3d792f4d9a08" containerID="2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23" exitCode=0 Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.254264 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" event={"ID":"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08","Type":"ContainerDied","Data":"2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23"} Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.255803 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262"} Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.269807 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.286403 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.301786 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.315461 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.325703 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.337886 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.351383 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.370189 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.385379 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.397928 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.414694 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.441513 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.465832 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.494329 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 
11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.507927 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.518409 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.531376 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.541629 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e264
71353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.552737 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.567581 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.579490 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.591716 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.603331 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.608087 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.614132 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.615487 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.660409 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.672282 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.672672 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.672776 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.672803 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:36 crc kubenswrapper[4930]: E1124 11:59:36.672907 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:59:40.672876884 +0000 UTC m=+27.287204824 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:59:36 crc kubenswrapper[4930]: E1124 11:59:36.672914 4930 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:59:36 crc kubenswrapper[4930]: E1124 11:59:36.672973 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:40.672966157 +0000 UTC m=+27.287294107 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:59:36 crc kubenswrapper[4930]: E1124 11:59:36.672916 4930 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:59:36 crc kubenswrapper[4930]: E1124 11:59:36.673171 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:40.673126751 +0000 UTC m=+27.287454701 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.684682 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.694602 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.709844 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.725549 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.739511 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.750615 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.762486 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.773634 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.773687 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:36 crc kubenswrapper[4930]: E1124 
11:59:36.773801 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:59:36 crc kubenswrapper[4930]: E1124 11:59:36.773802 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:59:36 crc kubenswrapper[4930]: E1124 11:59:36.773844 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:59:36 crc kubenswrapper[4930]: E1124 11:59:36.773859 4930 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:36 crc kubenswrapper[4930]: E1124 11:59:36.773819 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:59:36 crc kubenswrapper[4930]: E1124 11:59:36.774002 4930 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:36 crc kubenswrapper[4930]: E1124 11:59:36.773916 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-24 11:59:40.773899587 +0000 UTC m=+27.388227537 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:36 crc kubenswrapper[4930]: E1124 11:59:36.774200 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:40.774163424 +0000 UTC m=+27.388491574 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.775971 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.794999 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.809639 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.824917 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:36 crc kubenswrapper[4930]: I1124 11:59:36.836983 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.083593 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.083682 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:37 crc kubenswrapper[4930]: E1124 11:59:37.083714 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:59:37 crc kubenswrapper[4930]: E1124 11:59:37.083803 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.083609 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:37 crc kubenswrapper[4930]: E1124 11:59:37.084123 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.261627 4930 generic.go:334] "Generic (PLEG): container finished" podID="aee5f87e-22f1-4e8c-8f14-3d792f4d9a08" containerID="9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d" exitCode=0 Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.261759 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" event={"ID":"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08","Type":"ContainerDied","Data":"9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d"} Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.267622 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerStarted","Data":"055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e"} Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.290241 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 
11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.306576 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.317524 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.329809 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.341703 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e264
71353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.354407 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.365644 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.378610 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.390844 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.408799 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.426909 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6
b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.441004 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:37 crc kubenswrapper[4930]: I1124 11:59:37.453236 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.274957 4930 generic.go:334] "Generic (PLEG): container finished" podID="aee5f87e-22f1-4e8c-8f14-3d792f4d9a08" containerID="e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a" exitCode=0 Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.275067 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" event={"ID":"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08","Type":"ContainerDied","Data":"e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a"} Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.294229 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.315976 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.332241 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 
11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.347204 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.359089 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.372632 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.383655 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e264
71353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.398505 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.411743 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.425197 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.439997 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.460969 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.481161 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6
b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.807414 4930 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.809239 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.809508 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.809528 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.809670 4930 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.815622 4930 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.815925 4930 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.817496 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.817526 4930 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.817537 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.817566 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.817578 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:38Z","lastTransitionTime":"2025-11-24T11:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:38 crc kubenswrapper[4930]: E1124 11:59:38.829471 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.832740 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.832768 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.832778 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.832791 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.832801 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:38Z","lastTransitionTime":"2025-11-24T11:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:38 crc kubenswrapper[4930]: E1124 11:59:38.846411 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.850947 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.850975 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.850989 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.851009 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.851023 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:38Z","lastTransitionTime":"2025-11-24T11:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:38 crc kubenswrapper[4930]: E1124 11:59:38.863475 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.866888 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.866942 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.866958 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.866977 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.866988 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:38Z","lastTransitionTime":"2025-11-24T11:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:38 crc kubenswrapper[4930]: E1124 11:59:38.896371 4930 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.898216 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.898333 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.898345 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.898366 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:38 crc kubenswrapper[4930]: I1124 11:59:38.898377 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:38Z","lastTransitionTime":"2025-11-24T11:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.000573 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.000613 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.000629 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.000649 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.000662 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:39Z","lastTransitionTime":"2025-11-24T11:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.083989 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.083999 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:39 crc kubenswrapper[4930]: E1124 11:59:39.084123 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:59:39 crc kubenswrapper[4930]: E1124 11:59:39.084181 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.084016 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:39 crc kubenswrapper[4930]: E1124 11:59:39.084257 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.102589 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.102640 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.102652 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.102668 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.102679 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:39Z","lastTransitionTime":"2025-11-24T11:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.205257 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.205302 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.205312 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.205326 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.205336 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:39Z","lastTransitionTime":"2025-11-24T11:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.286480 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerStarted","Data":"d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335"} Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.289589 4930 generic.go:334] "Generic (PLEG): container finished" podID="aee5f87e-22f1-4e8c-8f14-3d792f4d9a08" containerID="03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066" exitCode=0 Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.289622 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" event={"ID":"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08","Type":"ContainerDied","Data":"03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066"} Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.306329 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.308257 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.308602 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.308865 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.309092 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.309255 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:39Z","lastTransitionTime":"2025-11-24T11:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.319658 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.331797 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.344511 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.360839 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.378875 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.395169 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6
b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.407804 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.411446 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.411480 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:39 crc 
kubenswrapper[4930]: I1124 11:59:39.411489 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.411505 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.411516 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:39Z","lastTransitionTime":"2025-11-24T11:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.418781 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.431232 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.443421 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.452138 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.465263 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.514610 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.514661 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.514676 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.514699 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.514717 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:39Z","lastTransitionTime":"2025-11-24T11:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.550700 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8mhdf"] Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.551100 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8mhdf" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.556705 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.556945 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.557243 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.557313 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.577712 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.587691 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.595961 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.598049 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ef6ac963-b7db-4c43-891c-d8eb105e566a-serviceca\") pod \"node-ca-8mhdf\" (UID: \"ef6ac963-b7db-4c43-891c-d8eb105e566a\") " pod="openshift-image-registry/node-ca-8mhdf" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.598106 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfpc4\" (UniqueName: \"kubernetes.io/projected/ef6ac963-b7db-4c43-891c-d8eb105e566a-kube-api-access-pfpc4\") pod \"node-ca-8mhdf\" (UID: \"ef6ac963-b7db-4c43-891c-d8eb105e566a\") " pod="openshift-image-registry/node-ca-8mhdf" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.598124 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef6ac963-b7db-4c43-891c-d8eb105e566a-host\") pod \"node-ca-8mhdf\" (UID: \"ef6ac963-b7db-4c43-891c-d8eb105e566a\") " pod="openshift-image-registry/node-ca-8mhdf" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.606443 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.617124 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.617158 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.617171 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 
11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.617186 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.617195 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:39Z","lastTransitionTime":"2025-11-24T11:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.617183 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.629193 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.640633 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.652012 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.664582 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.676855 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.693670 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.699241 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfpc4\" (UniqueName: \"kubernetes.io/projected/ef6ac963-b7db-4c43-891c-d8eb105e566a-kube-api-access-pfpc4\") pod \"node-ca-8mhdf\" (UID: \"ef6ac963-b7db-4c43-891c-d8eb105e566a\") " pod="openshift-image-registry/node-ca-8mhdf" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.699285 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef6ac963-b7db-4c43-891c-d8eb105e566a-host\") pod \"node-ca-8mhdf\" (UID: \"ef6ac963-b7db-4c43-891c-d8eb105e566a\") " pod="openshift-image-registry/node-ca-8mhdf" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.699365 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ef6ac963-b7db-4c43-891c-d8eb105e566a-serviceca\") pod \"node-ca-8mhdf\" (UID: \"ef6ac963-b7db-4c43-891c-d8eb105e566a\") " pod="openshift-image-registry/node-ca-8mhdf" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.699471 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/ef6ac963-b7db-4c43-891c-d8eb105e566a-host\") pod \"node-ca-8mhdf\" (UID: \"ef6ac963-b7db-4c43-891c-d8eb105e566a\") " pod="openshift-image-registry/node-ca-8mhdf" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.700606 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ef6ac963-b7db-4c43-891c-d8eb105e566a-serviceca\") pod \"node-ca-8mhdf\" (UID: \"ef6ac963-b7db-4c43-891c-d8eb105e566a\") " pod="openshift-image-registry/node-ca-8mhdf" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.710430 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.719050 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfpc4\" (UniqueName: \"kubernetes.io/projected/ef6ac963-b7db-4c43-891c-d8eb105e566a-kube-api-access-pfpc4\") pod \"node-ca-8mhdf\" (UID: \"ef6ac963-b7db-4c43-891c-d8eb105e566a\") " pod="openshift-image-registry/node-ca-8mhdf" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.719768 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.719800 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.719809 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.719825 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.719843 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:39Z","lastTransitionTime":"2025-11-24T11:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.723630 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.736182 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.822594 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.822637 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.822649 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.822665 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.822675 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:39Z","lastTransitionTime":"2025-11-24T11:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.863831 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8mhdf" Nov 24 11:59:39 crc kubenswrapper[4930]: W1124 11:59:39.879601 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef6ac963_b7db_4c43_891c_d8eb105e566a.slice/crio-3fdf9cd58e9cf46b318cfdfc6c0a6e5004195b512e6a19b1ff4143fe0f4194ed WatchSource:0}: Error finding container 3fdf9cd58e9cf46b318cfdfc6c0a6e5004195b512e6a19b1ff4143fe0f4194ed: Status 404 returned error can't find the container with id 3fdf9cd58e9cf46b318cfdfc6c0a6e5004195b512e6a19b1ff4143fe0f4194ed Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.925346 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.925478 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.925505 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.925522 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:39 crc kubenswrapper[4930]: I1124 11:59:39.925532 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:39Z","lastTransitionTime":"2025-11-24T11:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.027651 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.027691 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.027702 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.027717 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.027727 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:40Z","lastTransitionTime":"2025-11-24T11:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.130290 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.130330 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.130338 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.130352 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.130362 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:40Z","lastTransitionTime":"2025-11-24T11:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.232270 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.232299 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.232308 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.232322 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.232331 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:40Z","lastTransitionTime":"2025-11-24T11:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.299092 4930 generic.go:334] "Generic (PLEG): container finished" podID="aee5f87e-22f1-4e8c-8f14-3d792f4d9a08" containerID="f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2" exitCode=0 Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.299154 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" event={"ID":"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08","Type":"ContainerDied","Data":"f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2"} Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.300047 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8mhdf" event={"ID":"ef6ac963-b7db-4c43-891c-d8eb105e566a","Type":"ContainerStarted","Data":"3fdf9cd58e9cf46b318cfdfc6c0a6e5004195b512e6a19b1ff4143fe0f4194ed"} Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.311223 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.322400 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.335008 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:40 crc 
kubenswrapper[4930]: I1124 11:59:40.335047 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.335061 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.335078 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.335088 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:40Z","lastTransitionTime":"2025-11-24T11:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.336750 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6
b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.348562 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.360385 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.374801 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.391105 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.409327 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.422718 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.434860 4930 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.437704 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.437736 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.437749 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.437763 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.437773 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:40Z","lastTransitionTime":"2025-11-24T11:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.448628 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.466101 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.478093 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.490440 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.539805 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.539843 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.539854 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 
11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.539869 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.539880 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:40Z","lastTransitionTime":"2025-11-24T11:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.643145 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.643204 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.643221 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.643245 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.643262 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:40Z","lastTransitionTime":"2025-11-24T11:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.709090 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.709265 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.709327 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:40 crc kubenswrapper[4930]: E1124 11:59:40.709494 4930 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:59:40 crc kubenswrapper[4930]: E1124 11:59:40.709602 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:59:48.709538058 +0000 UTC m=+35.323866058 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:59:40 crc kubenswrapper[4930]: E1124 11:59:40.709621 4930 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:59:40 crc kubenswrapper[4930]: E1124 11:59:40.709721 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:48.70962891 +0000 UTC m=+35.323956900 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:59:40 crc kubenswrapper[4930]: E1124 11:59:40.709785 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:48.709739133 +0000 UTC m=+35.324067123 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.746140 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.746171 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.746184 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.746202 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.746214 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:40Z","lastTransitionTime":"2025-11-24T11:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.810766 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.810874 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:40 crc kubenswrapper[4930]: E1124 11:59:40.811035 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:59:40 crc kubenswrapper[4930]: E1124 11:59:40.811080 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:59:40 crc kubenswrapper[4930]: E1124 11:59:40.811097 4930 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:40 crc kubenswrapper[4930]: E1124 11:59:40.811134 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:59:40 crc 
kubenswrapper[4930]: E1124 11:59:40.811175 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:48.811149196 +0000 UTC m=+35.425477226 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:40 crc kubenswrapper[4930]: E1124 11:59:40.811183 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:59:40 crc kubenswrapper[4930]: E1124 11:59:40.811213 4930 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:40 crc kubenswrapper[4930]: E1124 11:59:40.811329 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:48.81129894 +0000 UTC m=+35.425626950 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.848464 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.848499 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.848510 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.848526 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.848542 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:40Z","lastTransitionTime":"2025-11-24T11:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.951332 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.951380 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.951392 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.951411 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:40 crc kubenswrapper[4930]: I1124 11:59:40.951425 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:40Z","lastTransitionTime":"2025-11-24T11:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.053487 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.053526 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.053538 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.053572 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.053585 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:41Z","lastTransitionTime":"2025-11-24T11:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.083743 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.083829 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:41 crc kubenswrapper[4930]: E1124 11:59:41.083880 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.083743 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:41 crc kubenswrapper[4930]: E1124 11:59:41.084014 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:59:41 crc kubenswrapper[4930]: E1124 11:59:41.084155 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.156110 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.156135 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.156143 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.156157 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.156171 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:41Z","lastTransitionTime":"2025-11-24T11:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.258138 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.258176 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.258185 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.258202 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.258211 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:41Z","lastTransitionTime":"2025-11-24T11:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.309603 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerStarted","Data":"e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640"} Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.309854 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.320431 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" event={"ID":"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08","Type":"ContainerStarted","Data":"5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd"} Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.322869 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8mhdf" event={"ID":"ef6ac963-b7db-4c43-891c-d8eb105e566a","Type":"ContainerStarted","Data":"9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16"} Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.330460 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.343667 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.346092 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.360803 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:41 crc 
kubenswrapper[4930]: I1124 11:59:41.360845 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.360894 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.360969 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.361072 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:41Z","lastTransitionTime":"2025-11-24T11:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.370037 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbe
d335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.388075 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6
b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.403516 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.417177 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.431705 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.444513 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.456049 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.464430 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.464472 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.464480 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.464495 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.464506 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:41Z","lastTransitionTime":"2025-11-24T11:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.469172 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.480996 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 
11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.492881 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.507860 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.519459 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.533090 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.543871 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"}
,{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.555829 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.565694 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.566020 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.566045 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.566056 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.566068 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.566077 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:41Z","lastTransitionTime":"2025-11-24T11:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.578585 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.589287 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.606485 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.624241 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b
80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326
c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T
11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.639033 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.651452 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.664271 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 
11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.668173 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.668218 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.668229 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.668245 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.668258 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:41Z","lastTransitionTime":"2025-11-24T11:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.676145 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.688330 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.699948 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.770705 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.770755 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.770814 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.770828 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.770837 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:41Z","lastTransitionTime":"2025-11-24T11:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.872780 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.872829 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.872840 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.872856 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.872867 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:41Z","lastTransitionTime":"2025-11-24T11:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.975205 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.975251 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.975261 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.975275 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:41 crc kubenswrapper[4930]: I1124 11:59:41.975286 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:41Z","lastTransitionTime":"2025-11-24T11:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.077779 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.077823 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.077837 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.077853 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.077864 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:42Z","lastTransitionTime":"2025-11-24T11:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.182690 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.182722 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.182731 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.182744 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.182753 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:42Z","lastTransitionTime":"2025-11-24T11:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.285265 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.285344 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.285365 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.285388 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.285405 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:42Z","lastTransitionTime":"2025-11-24T11:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.327216 4930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.328955 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.357646 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.375313 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379
b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.387204 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.388268 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.388315 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.388327 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.388344 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.388357 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:42Z","lastTransitionTime":"2025-11-24T11:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.397527 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.411133 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.422807 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.431237 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.445368 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.458075 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e264
71353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.470136 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.486170 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.489952 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.489996 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.490006 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.490024 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.490035 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:42Z","lastTransitionTime":"2025-11-24T11:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.498918 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.511834 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.521928 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.532974 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:42Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.592476 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.592512 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.592521 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.592533 4930 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.592559 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:42Z","lastTransitionTime":"2025-11-24T11:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.694836 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.694880 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.694892 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.694909 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.694924 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:42Z","lastTransitionTime":"2025-11-24T11:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.797116 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.797168 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.797183 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.797201 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.797211 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:42Z","lastTransitionTime":"2025-11-24T11:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.899754 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.899802 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.899813 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.899831 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:42 crc kubenswrapper[4930]: I1124 11:59:42.899843 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:42Z","lastTransitionTime":"2025-11-24T11:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.003201 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.003256 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.003272 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.003297 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.003314 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:43Z","lastTransitionTime":"2025-11-24T11:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.083645 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.083646 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:43 crc kubenswrapper[4930]: E1124 11:59:43.083788 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.083645 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:43 crc kubenswrapper[4930]: E1124 11:59:43.083848 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:59:43 crc kubenswrapper[4930]: E1124 11:59:43.083919 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.105233 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.105275 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.105286 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.105302 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.105315 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:43Z","lastTransitionTime":"2025-11-24T11:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.207959 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.207997 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.208007 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.208022 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.208031 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:43Z","lastTransitionTime":"2025-11-24T11:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.310349 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.310403 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.310413 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.310429 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.310441 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:43Z","lastTransitionTime":"2025-11-24T11:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.329685 4930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.413100 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.413140 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.413151 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.413166 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.413177 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:43Z","lastTransitionTime":"2025-11-24T11:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.515855 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.515892 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.515900 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.515912 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.515922 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:43Z","lastTransitionTime":"2025-11-24T11:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.618457 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.618500 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.618512 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.618528 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.618562 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:43Z","lastTransitionTime":"2025-11-24T11:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.721914 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.721979 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.721998 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.722024 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.722039 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:43Z","lastTransitionTime":"2025-11-24T11:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.823992 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.824039 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.824052 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.824067 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.824076 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:43Z","lastTransitionTime":"2025-11-24T11:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.926875 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.926941 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.926959 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.926984 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:43 crc kubenswrapper[4930]: I1124 11:59:43.927002 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:43Z","lastTransitionTime":"2025-11-24T11:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.029947 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.029983 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.029991 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.030004 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.030012 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:44Z","lastTransitionTime":"2025-11-24T11:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.100920 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.113685 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.126530 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.131962 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.132031 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.132044 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.132090 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.132106 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:44Z","lastTransitionTime":"2025-11-24T11:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.141212 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.157126 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.169630 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e264
71353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.185662 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.227626 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.233520 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.233566 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.233578 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.233593 4930 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.233603 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:44Z","lastTransitionTime":"2025-11-24T11:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.241709 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.261866 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.277317 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b
80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326
c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T
11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.289137 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.302584 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.314049 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.335076 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/0.log" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.335106 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.335781 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.335883 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.336000 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.336059 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:44Z","lastTransitionTime":"2025-11-24T11:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.337960 4930 generic.go:334] "Generic (PLEG): container finished" podID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerID="e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640" exitCode=1 Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.337997 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640"} Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.338514 4930 scope.go:117] "RemoveContainer" containerID="e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.351127 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.367628 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.386489 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:43Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:43.575879 6239 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:59:43.576387 6239 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:59:43.576429 6239 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:59:43.576458 6239 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:43.576517 6239 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 11:59:43.576533 6239 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 11:59:43.576571 6239 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 11:59:43.576572 6239 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:59:43.576606 6239 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 11:59:43.576650 6239 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 11:59:43.576658 6239 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:59:43.576694 6239 factory.go:656] Stopping watch factory\\\\nI1124 11:59:43.576718 6239 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:43.576728 6239 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.399196 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.413281 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.424781 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.436928 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.438999 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.439045 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.439059 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.439076 4930 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.439088 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:44Z","lastTransitionTime":"2025-11-24T11:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.450958 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.463699 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07
372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.474835 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.487109 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.498875 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.507243 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.515470 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.541334 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.541371 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.541383 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.541398 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.541410 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:44Z","lastTransitionTime":"2025-11-24T11:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.643258 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.643294 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.643303 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.643318 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.643328 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:44Z","lastTransitionTime":"2025-11-24T11:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.745758 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.745801 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.745814 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.745829 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.745841 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:44Z","lastTransitionTime":"2025-11-24T11:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.849100 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.849162 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.849184 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.849230 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.849265 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:44Z","lastTransitionTime":"2025-11-24T11:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.951713 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.951796 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.951806 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.951826 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:44 crc kubenswrapper[4930]: I1124 11:59:44.951842 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:44Z","lastTransitionTime":"2025-11-24T11:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.054826 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.054890 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.054902 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.054924 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.054938 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:45Z","lastTransitionTime":"2025-11-24T11:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.084178 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.084269 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.084331 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:45 crc kubenswrapper[4930]: E1124 11:59:45.084408 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:59:45 crc kubenswrapper[4930]: E1124 11:59:45.084357 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:59:45 crc kubenswrapper[4930]: E1124 11:59:45.084602 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.158032 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.158086 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.158098 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.158119 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.158133 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:45Z","lastTransitionTime":"2025-11-24T11:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.260826 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.260877 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.260889 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.260910 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.260923 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:45Z","lastTransitionTime":"2025-11-24T11:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.344900 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/0.log" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.350113 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerStarted","Data":"b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece"} Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.350224 4930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.362837 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.362872 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.362881 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.362898 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.362911 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:45Z","lastTransitionTime":"2025-11-24T11:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.365106 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.376620 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.386151 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.395837 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.406574 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e264
71353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.419651 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.430203 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.440400 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.451228 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.462737 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.465063 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:45 crc 
kubenswrapper[4930]: I1124 11:59:45.465101 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.465112 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.465127 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.465137 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:45Z","lastTransitionTime":"2025-11-24T11:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.490332 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:43Z\\\",\\\"message\\\":\\\") from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:43.575879 6239 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:59:43.576387 6239 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:59:43.576429 6239 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:59:43.576458 6239 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:43.576517 6239 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 11:59:43.576533 6239 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 11:59:43.576571 6239 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 11:59:43.576572 6239 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:59:43.576606 6239 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 11:59:43.576650 6239 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 11:59:43.576658 6239 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:59:43.576694 6239 factory.go:656] Stopping watch factory\\\\nI1124 11:59:43.576718 6239 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:43.576728 6239 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.506434 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.516996 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.527613 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.567572 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.567849 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.567920 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.567994 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.568055 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:45Z","lastTransitionTime":"2025-11-24T11:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.670769 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.670809 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.670819 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.670835 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.670845 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:45Z","lastTransitionTime":"2025-11-24T11:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.773159 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.773194 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.773203 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.773216 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.773226 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:45Z","lastTransitionTime":"2025-11-24T11:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.876790 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.876845 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.876857 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.876878 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.876891 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:45Z","lastTransitionTime":"2025-11-24T11:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.979094 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.979132 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.979141 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.979155 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:45 crc kubenswrapper[4930]: I1124 11:59:45.979166 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:45Z","lastTransitionTime":"2025-11-24T11:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.081655 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.081737 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.081756 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.081788 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.081807 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:46Z","lastTransitionTime":"2025-11-24T11:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.184934 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.184981 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.184995 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.185017 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.185034 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:46Z","lastTransitionTime":"2025-11-24T11:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.287677 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.287711 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.287721 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.287943 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.287959 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:46Z","lastTransitionTime":"2025-11-24T11:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.354994 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/1.log" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.355650 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/0.log" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.358288 4930 generic.go:334] "Generic (PLEG): container finished" podID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerID="b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece" exitCode=1 Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.358329 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece"} Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.358382 4930 scope.go:117] "RemoveContainer" containerID="e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.358957 4930 scope.go:117] "RemoveContainer" containerID="b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece" Nov 24 11:59:46 crc kubenswrapper[4930]: E1124 11:59:46.359099 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.377876 4930 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.389929 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.390498 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.390603 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.390625 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.390656 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.390678 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:46Z","lastTransitionTime":"2025-11-24T11:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.401052 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.415445 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.437821 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:43Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:43.575879 6239 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:59:43.576387 6239 handler.go:190] Sending *v1.Node event handler 2 
for removal\\\\nI1124 11:59:43.576429 6239 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:59:43.576458 6239 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:43.576517 6239 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 11:59:43.576533 6239 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 11:59:43.576571 6239 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 11:59:43.576572 6239 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:59:43.576606 6239 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 11:59:43.576650 6239 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 11:59:43.576658 6239 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:59:43.576694 6239 factory.go:656] Stopping watch factory\\\\nI1124 11:59:43.576718 6239 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:43.576728 6239 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"message\\\":\\\"urable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:59:45.621068 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:59:45.621064 6361 services_controller.go:451] Built service openshift-etcd/etcd cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.452782 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.459976 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.467672 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.482281 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.493303 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.493360 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.493375 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.493399 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.493416 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:46Z","lastTransitionTime":"2025-11-24T11:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.501411 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.514769 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.528020 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.547514 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.563401 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.574202 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.584532 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.596156 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.596231 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.596247 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.596274 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.596291 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:46Z","lastTransitionTime":"2025-11-24T11:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.597237 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.613035 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.633029 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:43Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:43.575879 6239 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:59:43.576387 6239 handler.go:190] Sending *v1.Node event handler 2 
for removal\\\\nI1124 11:59:43.576429 6239 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:59:43.576458 6239 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:43.576517 6239 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 11:59:43.576533 6239 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 11:59:43.576571 6239 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 11:59:43.576572 6239 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:59:43.576606 6239 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 11:59:43.576650 6239 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 11:59:43.576658 6239 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:59:43.576694 6239 factory.go:656] Stopping watch factory\\\\nI1124 11:59:43.576718 6239 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:43.576728 6239 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"message\\\":\\\"urable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:59:45.621068 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:59:45.621064 6361 services_controller.go:451] Built service openshift-etcd/etcd cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.651951 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.663607 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.674646 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.687051 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.698141 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.698435 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.698467 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.698476 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:46 crc 
kubenswrapper[4930]: I1124 11:59:46.698489 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.698497 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:46Z","lastTransitionTime":"2025-11-24T11:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.708905 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.719657 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.735099 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.750162 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.761029 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.801315 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:46 crc 
kubenswrapper[4930]: I1124 11:59:46.801366 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.801379 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.801396 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.801430 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:46Z","lastTransitionTime":"2025-11-24T11:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.904005 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.904053 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.904069 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.904089 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:46 crc kubenswrapper[4930]: I1124 11:59:46.904103 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:46Z","lastTransitionTime":"2025-11-24T11:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.006969 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.007018 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.007033 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.007053 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.007065 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:47Z","lastTransitionTime":"2025-11-24T11:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.084307 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.084300 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.084343 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:47 crc kubenswrapper[4930]: E1124 11:59:47.084667 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:59:47 crc kubenswrapper[4930]: E1124 11:59:47.084470 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:59:47 crc kubenswrapper[4930]: E1124 11:59:47.084827 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.109562 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.109622 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.109635 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.109660 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.109682 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:47Z","lastTransitionTime":"2025-11-24T11:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.213427 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.213477 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.213492 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.213510 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.213522 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:47Z","lastTransitionTime":"2025-11-24T11:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.249926 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk"] Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.250728 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.255858 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.255920 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.272473 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.279286 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e677c41-2d4e-47b3-840b-cd43f1c5ed34-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4vsnk\" (UID: \"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.279393 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk22n\" (UniqueName: \"kubernetes.io/projected/4e677c41-2d4e-47b3-840b-cd43f1c5ed34-kube-api-access-hk22n\") pod \"ovnkube-control-plane-749d76644c-4vsnk\" (UID: \"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.280019 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/4e677c41-2d4e-47b3-840b-cd43f1c5ed34-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4vsnk\" (UID: \"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.280088 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e677c41-2d4e-47b3-840b-cd43f1c5ed34-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4vsnk\" (UID: \"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.292097 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.307211 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\
\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.316202 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.316239 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.316249 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.316262 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.316271 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:47Z","lastTransitionTime":"2025-11-24T11:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.323013 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.336848 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.356408 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d73aa53cd63dc9dab5642a63be66529e339b1cf223d73d13c03dd0103b640\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:43Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:43.575879 6239 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:59:43.576387 6239 handler.go:190] Sending *v1.Node event handler 2 
for removal\\\\nI1124 11:59:43.576429 6239 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:59:43.576458 6239 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:43.576517 6239 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 11:59:43.576533 6239 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 11:59:43.576571 6239 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 11:59:43.576572 6239 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 11:59:43.576606 6239 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 11:59:43.576650 6239 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 11:59:43.576658 6239 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:59:43.576694 6239 factory.go:656] Stopping watch factory\\\\nI1124 11:59:43.576718 6239 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:43.576728 6239 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"message\\\":\\\"urable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:59:45.621068 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:59:45.621064 6361 services_controller.go:451] Built service openshift-etcd/etcd cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.362619 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/1.log" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.365765 4930 scope.go:117] "RemoveContainer" containerID="b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece" Nov 24 11:59:47 crc kubenswrapper[4930]: E1124 11:59:47.365937 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.374293 4930 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.380977 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e677c41-2d4e-47b3-840b-cd43f1c5ed34-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4vsnk\" (UID: \"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.381060 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e677c41-2d4e-47b3-840b-cd43f1c5ed34-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4vsnk\" (UID: \"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.381108 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e677c41-2d4e-47b3-840b-cd43f1c5ed34-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4vsnk\" (UID: \"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.381163 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk22n\" (UniqueName: \"kubernetes.io/projected/4e677c41-2d4e-47b3-840b-cd43f1c5ed34-kube-api-access-hk22n\") pod \"ovnkube-control-plane-749d76644c-4vsnk\" 
(UID: \"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.381801 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e677c41-2d4e-47b3-840b-cd43f1c5ed34-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4vsnk\" (UID: \"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.382668 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e677c41-2d4e-47b3-840b-cd43f1c5ed34-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4vsnk\" (UID: \"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.387480 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.395429 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e677c41-2d4e-47b3-840b-cd43f1c5ed34-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4vsnk\" (UID: \"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.400316 4930 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-hk22n\" (UniqueName: \"kubernetes.io/projected/4e677c41-2d4e-47b3-840b-cd43f1c5ed34-kube-api-access-hk22n\") pod \"ovnkube-control-plane-749d76644c-4vsnk\" (UID: \"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.406349 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.418171 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.418561 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.418600 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.418614 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.418631 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.418643 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:47Z","lastTransitionTime":"2025-11-24T11:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.429749 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b0
2c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.440255 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.449158 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.458401 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.471229 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d8
2b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.484682 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.496751 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.512007 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.520240 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.520263 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.520271 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.520285 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.520294 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:47Z","lastTransitionTime":"2025-11-24T11:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.523880 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.536745 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.549951 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.568018 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"message\\\":\\\"urable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:59:45.621068 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for 
anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:59:45.621064 6361 services_controller.go:451] Built service openshift-etcd/etcd cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.571277 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" Nov 24 11:59:47 crc kubenswrapper[4930]: W1124 11:59:47.584828 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e677c41_2d4e_47b3_840b_cd43f1c5ed34.slice/crio-2786aff48f663f791986ddff5be00033f4622f3aae69477b0f2c3501875b84a1 WatchSource:0}: Error finding container 2786aff48f663f791986ddff5be00033f4622f3aae69477b0f2c3501875b84a1: Status 404 returned error can't find the container with id 2786aff48f663f791986ddff5be00033f4622f3aae69477b0f2c3501875b84a1 Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.588862 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b8
19eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751b
de5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.603126 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.616828 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.625414 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.625503 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.625529 4930 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.625636 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.625665 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:47Z","lastTransitionTime":"2025-11-24T11:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.629268 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.642656 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources
\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.653660 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.664273 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.676120 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.727894 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.727935 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.727944 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.727960 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.727968 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:47Z","lastTransitionTime":"2025-11-24T11:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.830851 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.831155 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.831236 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.831318 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.831400 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:47Z","lastTransitionTime":"2025-11-24T11:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.933497 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.933528 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.933568 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.933585 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:47 crc kubenswrapper[4930]: I1124 11:59:47.933593 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:47Z","lastTransitionTime":"2025-11-24T11:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.005250 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-r4jtv"] Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.005969 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.006048 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.021716 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.036495 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.036552 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.036563 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.036579 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.036592 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:48Z","lastTransitionTime":"2025-11-24T11:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.040282 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.056114 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.067319 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.085894 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.099455 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.109867 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc 
kubenswrapper[4930]: I1124 11:59:48.125050 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.140844 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.158120 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.174296 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.191479 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fg4g\" 
(UniqueName: \"kubernetes.io/projected/96ced043-6cad-4f17-8648-624f36bf14f1-kube-api-access-7fg4g\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.191582 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.192701 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"message\\\":\\\"urable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:59:45.621068 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for 
anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:59:45.621064 6361 services_controller.go:451] Built service openshift-etcd/etcd cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.208694 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.219822 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.229364 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.238056 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.238080 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.238089 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.238101 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.238111 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:48Z","lastTransitionTime":"2025-11-24T11:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.241457 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.293234 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fg4g\" (UniqueName: \"kubernetes.io/projected/96ced043-6cad-4f17-8648-624f36bf14f1-kube-api-access-7fg4g\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.293696 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.293802 4930 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.293922 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs podName:96ced043-6cad-4f17-8648-624f36bf14f1 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:48.793899428 +0000 UTC m=+35.408227388 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs") pod "network-metrics-daemon-r4jtv" (UID: "96ced043-6cad-4f17-8648-624f36bf14f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.317807 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fg4g\" (UniqueName: \"kubernetes.io/projected/96ced043-6cad-4f17-8648-624f36bf14f1-kube-api-access-7fg4g\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.340424 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.340459 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.340467 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.340479 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.340490 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:48Z","lastTransitionTime":"2025-11-24T11:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.369396 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" event={"ID":"4e677c41-2d4e-47b3-840b-cd43f1c5ed34","Type":"ContainerStarted","Data":"d68cdf2050ad1fef444882483479c1be2efb21f2fe5610b8118865f819048dbc"} Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.370301 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" event={"ID":"4e677c41-2d4e-47b3-840b-cd43f1c5ed34","Type":"ContainerStarted","Data":"c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0"} Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.370385 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" event={"ID":"4e677c41-2d4e-47b3-840b-cd43f1c5ed34","Type":"ContainerStarted","Data":"2786aff48f663f791986ddff5be00033f4622f3aae69477b0f2c3501875b84a1"} Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.381752 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.394444 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.405764 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb2
1f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.415318 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.426466 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.443264 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.443307 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.443316 4930 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.443331 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.443332 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.443341 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:48Z","lastTransitionTime":"2025-11-24T11:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.451785 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.462715 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.472590 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e264
71353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.483253 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.499861 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"message\\\":\\\"urable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:59:45.621068 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for 
anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:59:45.621064 6361 services_controller.go:451] Built service openshift-etcd/etcd cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.512714 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.522729 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc 
kubenswrapper[4930]: I1124 11:59:48.532783 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.543319 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.545639 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.545664 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.545675 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.545690 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.545702 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:48Z","lastTransitionTime":"2025-11-24T11:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.554562 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.647990 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.648260 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.648338 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.648409 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.648465 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:48Z","lastTransitionTime":"2025-11-24T11:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.750970 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.751026 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.751034 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.751050 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.751058 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:48Z","lastTransitionTime":"2025-11-24T11:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.799183 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.799285 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.799314 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.799330 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.799425 4930 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:59:48 crc kubenswrapper[4930]: 
E1124 11:59:48.799470 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:00:04.799458564 +0000 UTC m=+51.413786514 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.799759 4930 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.799815 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:00:04.799790832 +0000 UTC m=+51.414118842 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.799769 4930 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.799851 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:00:04.799837644 +0000 UTC m=+51.414165624 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.799874 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs podName:96ced043-6cad-4f17-8648-624f36bf14f1 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:49.799862174 +0000 UTC m=+36.414190164 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs") pod "network-metrics-daemon-r4jtv" (UID: "96ced043-6cad-4f17-8648-624f36bf14f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.854459 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.854796 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.854894 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.854996 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.855088 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:48Z","lastTransitionTime":"2025-11-24T11:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.900067 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.900131 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.900282 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.900303 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.900299 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.900349 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.900363 4930 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl 
for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.900317 4930 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.900413 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 12:00:04.900396504 +0000 UTC m=+51.514724454 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:48 crc kubenswrapper[4930]: E1124 11:59:48.900429 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 12:00:04.900422895 +0000 UTC m=+51.514750845 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.957222 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.957274 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.957284 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.957297 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:48 crc kubenswrapper[4930]: I1124 11:59:48.957306 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:48Z","lastTransitionTime":"2025-11-24T11:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.034115 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.034145 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.034153 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.034168 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.034179 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: E1124 11:59:49.046046 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.049348 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.049483 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.049566 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.049648 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.049706 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.064082 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.064135 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.064148 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.064164 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.064175 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.080165 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.080209 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.080221 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.080238 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.080250 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.084056 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.084093 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.084137 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:49 crc kubenswrapper[4930]: E1124 11:59:49.084190 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:59:49 crc kubenswrapper[4930]: E1124 11:59:49.084315 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:59:49 crc kubenswrapper[4930]: E1124 11:59:49.084399 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:59:49 crc kubenswrapper[4930]: E1124 11:59:49.091904 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.097080 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.097131 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.097151 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.097174 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.097193 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: E1124 11:59:49.114204 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:49 crc kubenswrapper[4930]: E1124 11:59:49.115425 4930 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.117135 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.117158 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.117166 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.117178 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.117187 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.220683 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.221207 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.221599 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.221932 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.222189 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.325068 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.325107 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.325118 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.325141 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.325151 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.428056 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.428104 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.428116 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.428133 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.428142 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.530269 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.530305 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.530339 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.530358 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.530371 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.633233 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.633285 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.633297 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.633316 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.633331 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.736227 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.736263 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.736272 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.736288 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.736298 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.808316 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 11:59:49 crc kubenswrapper[4930]: E1124 11:59:49.808487 4930 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:59:49 crc kubenswrapper[4930]: E1124 11:59:49.808607 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs podName:96ced043-6cad-4f17-8648-624f36bf14f1 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:51.808585523 +0000 UTC m=+38.422913473 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs") pod "network-metrics-daemon-r4jtv" (UID: "96ced043-6cad-4f17-8648-624f36bf14f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.839835 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.839878 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.839890 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.839907 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.839918 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.942257 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.942296 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.942305 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.942320 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:49 crc kubenswrapper[4930]: I1124 11:59:49.942330 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:49Z","lastTransitionTime":"2025-11-24T11:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.046670 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.046740 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.046759 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.046783 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.046801 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:50Z","lastTransitionTime":"2025-11-24T11:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.084312 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 11:59:50 crc kubenswrapper[4930]: E1124 11:59:50.084509 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.149960 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.149993 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.150004 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.150017 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.150025 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:50Z","lastTransitionTime":"2025-11-24T11:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.252585 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.252649 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.252663 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.252676 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.252687 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:50Z","lastTransitionTime":"2025-11-24T11:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.354945 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.354979 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.354988 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.355000 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.355009 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:50Z","lastTransitionTime":"2025-11-24T11:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.457372 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.457406 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.457417 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.457458 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.457470 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:50Z","lastTransitionTime":"2025-11-24T11:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.558907 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.558943 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.558954 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.558970 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.558978 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:50Z","lastTransitionTime":"2025-11-24T11:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.662387 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.662452 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.662473 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.662503 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.662523 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:50Z","lastTransitionTime":"2025-11-24T11:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.765107 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.765177 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.765200 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.765231 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.765254 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:50Z","lastTransitionTime":"2025-11-24T11:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.868205 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.868284 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.868299 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.868322 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.868334 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:50Z","lastTransitionTime":"2025-11-24T11:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.970966 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.971007 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.971020 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.971051 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:50 crc kubenswrapper[4930]: I1124 11:59:50.971063 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:50Z","lastTransitionTime":"2025-11-24T11:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.073708 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.073786 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.073810 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.073841 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.073862 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:51Z","lastTransitionTime":"2025-11-24T11:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.083995 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.084091 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.084091 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:51 crc kubenswrapper[4930]: E1124 11:59:51.084252 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:59:51 crc kubenswrapper[4930]: E1124 11:59:51.085144 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:59:51 crc kubenswrapper[4930]: E1124 11:59:51.085647 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.176763 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.176979 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.177088 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.177185 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.177274 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:51Z","lastTransitionTime":"2025-11-24T11:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.279745 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.279802 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.279816 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.279835 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.279848 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:51Z","lastTransitionTime":"2025-11-24T11:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.382099 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.382368 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.382485 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.382578 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.382638 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:51Z","lastTransitionTime":"2025-11-24T11:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.485444 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.485512 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.485530 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.485572 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.485586 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:51Z","lastTransitionTime":"2025-11-24T11:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.588136 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.588535 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.588802 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.588934 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.589073 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:51Z","lastTransitionTime":"2025-11-24T11:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.692002 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.692340 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.692491 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.692655 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.692901 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:51Z","lastTransitionTime":"2025-11-24T11:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.795933 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.795997 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.796010 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.796028 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.796039 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:51Z","lastTransitionTime":"2025-11-24T11:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.827787 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 11:59:51 crc kubenswrapper[4930]: E1124 11:59:51.827981 4930 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:59:51 crc kubenswrapper[4930]: E1124 11:59:51.828343 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs podName:96ced043-6cad-4f17-8648-624f36bf14f1 nodeName:}" failed. No retries permitted until 2025-11-24 11:59:55.828314631 +0000 UTC m=+42.442642611 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs") pod "network-metrics-daemon-r4jtv" (UID: "96ced043-6cad-4f17-8648-624f36bf14f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.899092 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.899137 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.899149 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.899164 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:51 crc kubenswrapper[4930]: I1124 11:59:51.899172 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:51Z","lastTransitionTime":"2025-11-24T11:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.001986 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.002030 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.002044 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.002062 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.002103 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:52Z","lastTransitionTime":"2025-11-24T11:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.084823 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 11:59:52 crc kubenswrapper[4930]: E1124 11:59:52.085182 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.104230 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.104264 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.104274 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.104289 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.104298 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:52Z","lastTransitionTime":"2025-11-24T11:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.206424 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.206792 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.206921 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.207010 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.207124 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:52Z","lastTransitionTime":"2025-11-24T11:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.309361 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.309393 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.309404 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.309418 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.309427 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:52Z","lastTransitionTime":"2025-11-24T11:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.412850 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.412895 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.412907 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.412925 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.412941 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:52Z","lastTransitionTime":"2025-11-24T11:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.514874 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.514906 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.514914 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.514931 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.514940 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:52Z","lastTransitionTime":"2025-11-24T11:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.617836 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.617887 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.617906 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.617929 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.617946 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:52Z","lastTransitionTime":"2025-11-24T11:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.720589 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.720887 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.720982 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.721069 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.721162 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:52Z","lastTransitionTime":"2025-11-24T11:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.823455 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.823495 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.823506 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.823521 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.823530 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:52Z","lastTransitionTime":"2025-11-24T11:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.926263 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.926524 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.926695 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.926996 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:52 crc kubenswrapper[4930]: I1124 11:59:52.927112 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:52Z","lastTransitionTime":"2025-11-24T11:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.029852 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.029892 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.029905 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.029920 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.029931 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:53Z","lastTransitionTime":"2025-11-24T11:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.083823 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.083823 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:53 crc kubenswrapper[4930]: E1124 11:59:53.084406 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.083895 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:53 crc kubenswrapper[4930]: E1124 11:59:53.084266 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:59:53 crc kubenswrapper[4930]: E1124 11:59:53.084509 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.132668 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.132702 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.132713 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.132729 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.132740 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:53Z","lastTransitionTime":"2025-11-24T11:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.235407 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.235462 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.235475 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.235491 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.235505 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:53Z","lastTransitionTime":"2025-11-24T11:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.337711 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.337745 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.337756 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.337772 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.337783 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:53Z","lastTransitionTime":"2025-11-24T11:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.440042 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.440108 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.440120 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.440135 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.440145 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:53Z","lastTransitionTime":"2025-11-24T11:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.542773 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.543408 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.543727 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.543814 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.543876 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:53Z","lastTransitionTime":"2025-11-24T11:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.647649 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.647738 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.647766 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.647800 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.647829 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:53Z","lastTransitionTime":"2025-11-24T11:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.750472 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.750526 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.750550 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.750571 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.750584 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:53Z","lastTransitionTime":"2025-11-24T11:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.853284 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.853730 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.853828 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.853918 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.853987 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:53Z","lastTransitionTime":"2025-11-24T11:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.956655 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.957134 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.957270 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.957375 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:53 crc kubenswrapper[4930]: I1124 11:59:53.957479 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:53Z","lastTransitionTime":"2025-11-24T11:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.060010 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.060090 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.060116 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.060149 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.060172 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:54Z","lastTransitionTime":"2025-11-24T11:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.084851 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 11:59:54 crc kubenswrapper[4930]: E1124 11:59:54.085084 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.100759 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc 
kubenswrapper[4930]: I1124 11:59:54.118330 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.133481 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.153570 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.162823 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.162867 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.162883 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.162903 4930 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.162917 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:54Z","lastTransitionTime":"2025-11-24T11:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.171699 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.206371 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"message\\\":\\\"urable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:59:45.621068 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for 
anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:59:45.621064 6361 services_controller.go:451] Built service openshift-etcd/etcd cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.224011 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.243561 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.257995 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.264766 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.264814 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.264824 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.264840 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.264850 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:54Z","lastTransitionTime":"2025-11-24T11:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.271987 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb21f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.286412 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582577
1aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.298562 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.310861 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.325722 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.340924 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.353469 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.367466 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:54 crc 
kubenswrapper[4930]: I1124 11:59:54.367521 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.367572 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.367595 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.367612 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:54Z","lastTransitionTime":"2025-11-24T11:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.469814 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.469856 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.469867 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.469884 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.469896 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:54Z","lastTransitionTime":"2025-11-24T11:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.572821 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.573702 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.573737 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.573762 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.573782 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:54Z","lastTransitionTime":"2025-11-24T11:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.676437 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.676476 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.676485 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.676499 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.676508 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:54Z","lastTransitionTime":"2025-11-24T11:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.779217 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.779599 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.779753 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.779875 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.779987 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:54Z","lastTransitionTime":"2025-11-24T11:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.882578 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.882633 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.882650 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.882669 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.882687 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:54Z","lastTransitionTime":"2025-11-24T11:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.984838 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.984882 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.984893 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.984908 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:54 crc kubenswrapper[4930]: I1124 11:59:54.984919 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:54Z","lastTransitionTime":"2025-11-24T11:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.084390 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.084417 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.084592 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:55 crc kubenswrapper[4930]: E1124 11:59:55.084524 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:59:55 crc kubenswrapper[4930]: E1124 11:59:55.084709 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:59:55 crc kubenswrapper[4930]: E1124 11:59:55.084819 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.087054 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.087134 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.087154 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.087173 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.087232 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:55Z","lastTransitionTime":"2025-11-24T11:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.190038 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.190079 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.190091 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.190108 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.190121 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:55Z","lastTransitionTime":"2025-11-24T11:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.293234 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.293302 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.293320 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.293344 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.293363 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:55Z","lastTransitionTime":"2025-11-24T11:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.394875 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.394905 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.394912 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.394924 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.394932 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:55Z","lastTransitionTime":"2025-11-24T11:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.496844 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.496890 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.496910 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.496930 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.496943 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:55Z","lastTransitionTime":"2025-11-24T11:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.599436 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.599479 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.599490 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.599509 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.599522 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:55Z","lastTransitionTime":"2025-11-24T11:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.702128 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.702162 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.702170 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.702182 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.702190 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:55Z","lastTransitionTime":"2025-11-24T11:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.805048 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.805101 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.805116 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.805135 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.805149 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:55Z","lastTransitionTime":"2025-11-24T11:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.877211 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv"
Nov 24 11:59:55 crc kubenswrapper[4930]: E1124 11:59:55.877419 4930 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 11:59:55 crc kubenswrapper[4930]: E1124 11:59:55.877566 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs podName:96ced043-6cad-4f17-8648-624f36bf14f1 nodeName:}" failed. No retries permitted until 2025-11-24 12:00:03.877510636 +0000 UTC m=+50.491838606 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs") pod "network-metrics-daemon-r4jtv" (UID: "96ced043-6cad-4f17-8648-624f36bf14f1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.907684 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.907745 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.907763 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.907787 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:55 crc kubenswrapper[4930]: I1124 11:59:55.907804 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:55Z","lastTransitionTime":"2025-11-24T11:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.010619 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.010676 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.010689 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.010708 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.010721 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:56Z","lastTransitionTime":"2025-11-24T11:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.084063 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv"
Nov 24 11:59:56 crc kubenswrapper[4930]: E1124 11:59:56.084207 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.113656 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.113698 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.113707 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.113721 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.113733 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:56Z","lastTransitionTime":"2025-11-24T11:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.216120 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.216193 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.216231 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.216250 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.216266 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:56Z","lastTransitionTime":"2025-11-24T11:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.319052 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.319110 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.319122 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.319139 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.319150 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:56Z","lastTransitionTime":"2025-11-24T11:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.422274 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.422356 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.422370 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.422384 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.422394 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:56Z","lastTransitionTime":"2025-11-24T11:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.524221 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.524245 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.524253 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.524264 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.524273 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:56Z","lastTransitionTime":"2025-11-24T11:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.626428 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.626504 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.626528 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.626606 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.626629 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:56Z","lastTransitionTime":"2025-11-24T11:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.728742 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.728773 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.728784 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.728810 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.728822 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:56Z","lastTransitionTime":"2025-11-24T11:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.834724 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.834774 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.834798 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.834820 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.834837 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:56Z","lastTransitionTime":"2025-11-24T11:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.938153 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.938201 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.938212 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.938231 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:56 crc kubenswrapper[4930]: I1124 11:59:56.938243 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:56Z","lastTransitionTime":"2025-11-24T11:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.041337 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.041377 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.041390 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.041408 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.041419 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:57Z","lastTransitionTime":"2025-11-24T11:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.083955 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.084025 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:59:57 crc kubenswrapper[4930]: E1124 11:59:57.084184 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:59:57 crc kubenswrapper[4930]: E1124 11:59:57.084308 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.084421 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:59:57 crc kubenswrapper[4930]: E1124 11:59:57.084640 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.143259 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.143293 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.143341 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.143357 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.143369 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:57Z","lastTransitionTime":"2025-11-24T11:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.245830 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.245863 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.245873 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.245886 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.245895 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:57Z","lastTransitionTime":"2025-11-24T11:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.348306 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.348354 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.348366 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.348383 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.348395 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:57Z","lastTransitionTime":"2025-11-24T11:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.450702 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.450739 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.450751 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.450768 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.450779 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:57Z","lastTransitionTime":"2025-11-24T11:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.553297 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.553338 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.553348 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.553365 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.553374 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:57Z","lastTransitionTime":"2025-11-24T11:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.655434 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.655474 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.655486 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.655501 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.655513 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:57Z","lastTransitionTime":"2025-11-24T11:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.758331 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.758363 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.758371 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.758383 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.758392 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:57Z","lastTransitionTime":"2025-11-24T11:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.862696 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.862733 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.862747 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.862767 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.862794 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:57Z","lastTransitionTime":"2025-11-24T11:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.964995 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.965050 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.965065 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.965084 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:57 crc kubenswrapper[4930]: I1124 11:59:57.965099 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:57Z","lastTransitionTime":"2025-11-24T11:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.067821 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.067861 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.067871 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.067886 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.067898 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:58Z","lastTransitionTime":"2025-11-24T11:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.084448 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv"
Nov 24 11:59:58 crc kubenswrapper[4930]: E1124 11:59:58.084645 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.170642 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.170711 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.170728 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.170751 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.170769 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:58Z","lastTransitionTime":"2025-11-24T11:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.273379 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.273412 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.273420 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.273435 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.273446 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:58Z","lastTransitionTime":"2025-11-24T11:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.375894 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.375960 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.375978 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.376008 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.376030 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:58Z","lastTransitionTime":"2025-11-24T11:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.478259 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.478297 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.478308 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.478325 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.478334 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:58Z","lastTransitionTime":"2025-11-24T11:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.580920 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.580967 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.580980 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.581001 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.581015 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:58Z","lastTransitionTime":"2025-11-24T11:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.683519 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.683604 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.683623 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.683649 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.683666 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:58Z","lastTransitionTime":"2025-11-24T11:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.786211 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.786238 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.786246 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.786260 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.786269 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:58Z","lastTransitionTime":"2025-11-24T11:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.888700 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.888742 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.888753 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.888768 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.888779 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:58Z","lastTransitionTime":"2025-11-24T11:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.991009 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.991052 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.991064 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.991081 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:58 crc kubenswrapper[4930]: I1124 11:59:58.991094 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:58Z","lastTransitionTime":"2025-11-24T11:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.084221 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.084317 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:59:59 crc kubenswrapper[4930]: E1124 11:59:59.084361 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.084376 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:59:59 crc kubenswrapper[4930]: E1124 11:59:59.084736 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:59:59 crc kubenswrapper[4930]: E1124 11:59:59.084880 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.086044 4930 scope.go:117] "RemoveContainer" containerID="b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.092629 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.092656 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.092665 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.092676 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.092685 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.195489 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.195783 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.195792 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.195807 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.195817 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.213720 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.213786 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.213802 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.213824 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.213837 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: E1124 11:59:59.231223 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.234838 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.234867 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.234876 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.234891 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.234901 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: E1124 11:59:59.252792 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.257296 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.257344 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.257354 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.257368 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.257377 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: E1124 11:59:59.270654 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.274594 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.274639 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.274650 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.274665 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.274678 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: E1124 11:59:59.285786 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.289296 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.289325 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.289333 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.289346 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.289355 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: E1124 11:59:59.303930 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: E1124 11:59:59.304170 4930 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.305716 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.305748 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.305757 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.305772 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.305782 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.407575 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/1.log" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.407638 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.407673 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.407684 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.407697 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.407709 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.410313 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerStarted","Data":"b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e"} Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.410435 4930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.425169 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.434925 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e264
71353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.454500 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"message\\\":\\\"urable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:59:45.621068 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for 
anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:59:45.621064 6361 services_controller.go:451] Built service openshift-etcd/etcd cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.468229 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.482468 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc 
kubenswrapper[4930]: I1124 11:59:59.498261 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.509961 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.510013 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.510032 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.510058 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.510074 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.515871 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.537512 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.552550 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.564654 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.581242 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.600347 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb2
1f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.612635 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.612674 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.612684 4930 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.612697 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.612706 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.613965 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.629763 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.641419 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.674858 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.716208 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.716251 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.716265 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.716283 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.716297 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.820717 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.820775 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.820791 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.820812 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.820826 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.923565 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.923611 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.923626 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.923647 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:59:59 crc kubenswrapper[4930]: I1124 11:59:59.923662 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:59:59Z","lastTransitionTime":"2025-11-24T11:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.025322 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.025360 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.025368 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.025383 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.025395 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:00Z","lastTransitionTime":"2025-11-24T12:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.084017 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:00 crc kubenswrapper[4930]: E1124 12:00:00.084135 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.128129 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.128179 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.128196 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.128216 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.128233 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:00Z","lastTransitionTime":"2025-11-24T12:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.230116 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.230159 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.230170 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.230185 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.230195 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:00Z","lastTransitionTime":"2025-11-24T12:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.332821 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.332855 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.332865 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.332880 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.332891 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:00Z","lastTransitionTime":"2025-11-24T12:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.415927 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/2.log" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.416928 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/1.log" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.419821 4930 generic.go:334] "Generic (PLEG): container finished" podID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerID="b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e" exitCode=1 Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.419868 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e"} Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.419903 4930 scope.go:117] "RemoveContainer" containerID="b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.421224 4930 scope.go:117] "RemoveContainer" containerID="b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e" Nov 24 12:00:00 crc kubenswrapper[4930]: E1124 12:00:00.421519 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.434549 4930 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.435628 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.435681 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.435691 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.435706 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.435715 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:00Z","lastTransitionTime":"2025-11-24T12:00:00Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.446759 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.456176 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb2
1f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.464681 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.477261 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.488623 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.497647 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.508786 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.518382 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e264
71353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.529899 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.537838 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:00 crc 
kubenswrapper[4930]: I1124 12:00:00.537897 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.537908 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.537924 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.537934 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:00Z","lastTransitionTime":"2025-11-24T12:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.548559 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b173dd49c56adb54c770d061f9f38a86e33f225d1bfbb439b3367fe407159ece\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"message\\\":\\\"urable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:}]\\\\nF1124 11:59:45.621068 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:59:45Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:59:45.621064 6361 services_controller.go:451] Built service openshift-etcd/etcd cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rul\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"ller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:59.940653 6580 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:59:59.940962 6580 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 11:59:59.940986 6580 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 11:59:59.941134 6580 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:59:59.944340 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:59:59.944405 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:59.946174 6580 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:59:59.946310 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:59:59.946330 6580 factory.go:656] Stopping watch factory\\\\nI1124 11:59:59.946346 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:59.946403 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:59:59.946505 6580 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.564245 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.576038 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc 
kubenswrapper[4930]: I1124 12:00:00.586367 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.595084 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.605303 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:00Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.640507 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.640766 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.640925 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.641069 4930 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.641204 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:00Z","lastTransitionTime":"2025-11-24T12:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.744636 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.744676 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.744686 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.744703 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.744712 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:00Z","lastTransitionTime":"2025-11-24T12:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.847720 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.847755 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.847763 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.847777 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.847786 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:00Z","lastTransitionTime":"2025-11-24T12:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.950297 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.950340 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.950350 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.950366 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:00 crc kubenswrapper[4930]: I1124 12:00:00.950377 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:00Z","lastTransitionTime":"2025-11-24T12:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.052062 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.052147 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.052166 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.052198 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.052216 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:01Z","lastTransitionTime":"2025-11-24T12:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.084447 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.084490 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.084487 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:01 crc kubenswrapper[4930]: E1124 12:00:01.084651 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:01 crc kubenswrapper[4930]: E1124 12:00:01.084749 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:01 crc kubenswrapper[4930]: E1124 12:00:01.084838 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.154598 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.154641 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.154655 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.154675 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.154685 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:01Z","lastTransitionTime":"2025-11-24T12:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.257431 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.257474 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.257485 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.257500 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.257509 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:01Z","lastTransitionTime":"2025-11-24T12:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.360172 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.360219 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.360243 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.360266 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.360279 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:01Z","lastTransitionTime":"2025-11-24T12:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.423913 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/2.log" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.462293 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.462338 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.462346 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.462361 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.462370 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:01Z","lastTransitionTime":"2025-11-24T12:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.565030 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.565081 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.565093 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.565109 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.565120 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:01Z","lastTransitionTime":"2025-11-24T12:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.668314 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.668351 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.668361 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.668373 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.668382 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:01Z","lastTransitionTime":"2025-11-24T12:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.770722 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.770784 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.770794 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.770826 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.770838 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:01Z","lastTransitionTime":"2025-11-24T12:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.873979 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.874025 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.874055 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.874069 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.874078 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:01Z","lastTransitionTime":"2025-11-24T12:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.977268 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.977627 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.977777 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.977909 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:01 crc kubenswrapper[4930]: I1124 12:00:01.977999 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:01Z","lastTransitionTime":"2025-11-24T12:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.080605 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.080642 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.080652 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.080665 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.080675 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:02Z","lastTransitionTime":"2025-11-24T12:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.083955 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:02 crc kubenswrapper[4930]: E1124 12:00:02.084110 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.184851 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.185129 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.185256 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.185453 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.185601 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:02Z","lastTransitionTime":"2025-11-24T12:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.290132 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.290829 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.290846 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.290876 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.290887 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:02Z","lastTransitionTime":"2025-11-24T12:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.394001 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.394054 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.394069 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.394087 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.394100 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:02Z","lastTransitionTime":"2025-11-24T12:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.496657 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.496762 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.496773 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.496789 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.496800 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:02Z","lastTransitionTime":"2025-11-24T12:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.600119 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.600167 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.600183 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.600202 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.600217 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:02Z","lastTransitionTime":"2025-11-24T12:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.703598 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.703665 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.703689 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.703717 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.703734 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:02Z","lastTransitionTime":"2025-11-24T12:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.806556 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.806607 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.806619 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.806638 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.806654 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:02Z","lastTransitionTime":"2025-11-24T12:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.877014 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.877826 4930 scope.go:117] "RemoveContainer" containerID="b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e" Nov 24 12:00:02 crc kubenswrapper[4930]: E1124 12:00:02.877975 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.898959 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:02Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.909280 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.909345 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.909363 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.909387 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.909414 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:02Z","lastTransitionTime":"2025-11-24T12:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.912576 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:02Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:02 crc 
kubenswrapper[4930]: I1124 12:00:02.933858 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:02Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.951052 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:00:02Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.969631 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:02Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:02 crc kubenswrapper[4930]: I1124 12:00:02.983933 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:02Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.005015 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"ller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:59.940653 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:59:59.940962 6580 shared_informer.go:320] Caches are synced 
for node-tracker-controller\\\\nI1124 11:59:59.940986 6580 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 11:59:59.941134 6580 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:59:59.944340 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:59:59.944405 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:59.946174 6580 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:59:59.946310 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:59:59.946330 6580 factory.go:656] Stopping watch factory\\\\nI1124 11:59:59.946346 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:59.946403 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:59:59.946505 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:03Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.012302 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.012337 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.012348 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.012365 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.012376 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:03Z","lastTransitionTime":"2025-11-24T12:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.022130 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:03Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.038859 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:03Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.052184 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb2
1f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:03Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.067363 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d8
2b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:03Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.077961 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:03Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.084447 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.084470 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:03 crc kubenswrapper[4930]: E1124 12:00:03.084583 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.084613 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:03 crc kubenswrapper[4930]: E1124 12:00:03.084719 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:03 crc kubenswrapper[4930]: E1124 12:00:03.084757 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.089611 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:03Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.098325 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7
713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:03Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.109453 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:03Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.115023 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.115062 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.115077 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.115096 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.115110 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:03Z","lastTransitionTime":"2025-11-24T12:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.119706 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:03Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.217587 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.217624 4930 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.217632 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.217647 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.217658 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:03Z","lastTransitionTime":"2025-11-24T12:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.319859 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.319900 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.319909 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.319922 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.319931 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:03Z","lastTransitionTime":"2025-11-24T12:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.422112 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.422146 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.422155 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.422169 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.422179 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:03Z","lastTransitionTime":"2025-11-24T12:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.524609 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.524895 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.525065 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.525169 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.525249 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:03Z","lastTransitionTime":"2025-11-24T12:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.628318 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.628364 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.628374 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.628388 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.628398 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:03Z","lastTransitionTime":"2025-11-24T12:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.730629 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.730699 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.730724 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.730751 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.730770 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:03Z","lastTransitionTime":"2025-11-24T12:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.833136 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.833956 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.834112 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.834234 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.834353 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:03Z","lastTransitionTime":"2025-11-24T12:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.936318 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.936361 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.936372 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.936388 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.936400 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:03Z","lastTransitionTime":"2025-11-24T12:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:03 crc kubenswrapper[4930]: I1124 12:00:03.959305 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:03 crc kubenswrapper[4930]: E1124 12:00:03.959482 4930 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:00:03 crc kubenswrapper[4930]: E1124 12:00:03.959566 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs podName:96ced043-6cad-4f17-8648-624f36bf14f1 nodeName:}" failed. No retries permitted until 2025-11-24 12:00:19.959527752 +0000 UTC m=+66.573855712 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs") pod "network-metrics-daemon-r4jtv" (UID: "96ced043-6cad-4f17-8648-624f36bf14f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.038715 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.038761 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.038772 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.038792 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.038805 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:04Z","lastTransitionTime":"2025-11-24T12:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.084684 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.084862 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.101574 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.113882 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.131971 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.141862 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.141900 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.141910 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 
12:00:04.141925 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.141934 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:04Z","lastTransitionTime":"2025-11-24T12:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.145793 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.162494 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"ller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:59.940653 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:59:59.940962 6580 shared_informer.go:320] Caches are synced 
for node-tracker-controller\\\\nI1124 11:59:59.940986 6580 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 11:59:59.941134 6580 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:59:59.944340 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:59:59.944405 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:59.946174 6580 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:59:59.946310 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:59:59.946330 6580 factory.go:656] Stopping watch factory\\\\nI1124 11:59:59.946346 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:59.946403 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:59:59.946505 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.175966 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.187670 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc 
kubenswrapper[4930]: I1124 12:00:04.199864 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.210656 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.222774 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb21f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.234515 4930 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.244524 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.244601 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.244623 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.244647 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.244668 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:04Z","lastTransitionTime":"2025-11-24T12:00:04Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.248059 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.259718 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.271040 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.286960 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d8
2b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.299799 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.347633 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.347667 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.347678 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:04 crc 
kubenswrapper[4930]: I1124 12:00:04.347695 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.347708 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:04Z","lastTransitionTime":"2025-11-24T12:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.450463 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.450739 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.450800 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.450870 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.450964 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:04Z","lastTransitionTime":"2025-11-24T12:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.552961 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.552999 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.553009 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.553022 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.553033 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:04Z","lastTransitionTime":"2025-11-24T12:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.664926 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.664968 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.664980 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.664995 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.665007 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:04Z","lastTransitionTime":"2025-11-24T12:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.719344 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.728468 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.730846 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.742112 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.752789 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.764772 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.767643 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.767713 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.767725 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.767740 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.767751 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:04Z","lastTransitionTime":"2025-11-24T12:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.775813 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.786452 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.802496 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"ller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:59.940653 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:59:59.940962 6580 shared_informer.go:320] Caches are synced 
for node-tracker-controller\\\\nI1124 11:59:59.940986 6580 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 11:59:59.941134 6580 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:59:59.944340 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:59:59.944405 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:59.946174 6580 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:59:59.946310 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:59:59.946330 6580 factory.go:656] Stopping watch factory\\\\nI1124 11:59:59.946346 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:59.946403 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:59:59.946505 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.814783 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.823914 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc 
kubenswrapper[4930]: I1124 12:00:04.833309 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.841291 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb2
1f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.851987 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.863191 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d8
2b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.865307 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.865396 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.865423 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" 
(UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.865437 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:00:36.865421951 +0000 UTC m=+83.479749901 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.865517 4930 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.865518 4930 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.865644 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:00:36.865629587 +0000 UTC m=+83.479957557 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.865723 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:00:36.865707369 +0000 UTC m=+83.480035319 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.871223 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.871289 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.871307 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.871327 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.871340 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:04Z","lastTransitionTime":"2025-11-24T12:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.876564 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.885354 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.893588 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:04Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.966436 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.966652 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.966659 4930 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.966689 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.966725 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.966756 4930 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.966812 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 12:00:36.966800183 +0000 UTC m=+83.581128133 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.966671 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.966854 4930 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:00:04 crc kubenswrapper[4930]: E1124 12:00:04.966940 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 12:00:36.966918287 +0000 UTC m=+83.581246257 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.974594 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.974642 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.974656 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.974678 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:04 crc kubenswrapper[4930]: I1124 12:00:04.974694 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:04Z","lastTransitionTime":"2025-11-24T12:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.076837 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.076874 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.076884 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.076898 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.076909 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:05Z","lastTransitionTime":"2025-11-24T12:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.084289 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.084308 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:05 crc kubenswrapper[4930]: E1124 12:00:05.084414 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.084471 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:05 crc kubenswrapper[4930]: E1124 12:00:05.084645 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:05 crc kubenswrapper[4930]: E1124 12:00:05.084706 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.179327 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.179383 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.179397 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.179417 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.179430 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:05Z","lastTransitionTime":"2025-11-24T12:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.282519 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.282588 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.282605 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.282624 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.282637 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:05Z","lastTransitionTime":"2025-11-24T12:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.385242 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.385307 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.385320 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.385337 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.385348 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:05Z","lastTransitionTime":"2025-11-24T12:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.487666 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.487726 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.487743 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.487771 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.487789 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:05Z","lastTransitionTime":"2025-11-24T12:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.590497 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.590552 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.590564 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.590580 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.590591 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:05Z","lastTransitionTime":"2025-11-24T12:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.692711 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.692744 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.692752 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.692766 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.692775 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:05Z","lastTransitionTime":"2025-11-24T12:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.795078 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.795130 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.795143 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.795157 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.795190 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:05Z","lastTransitionTime":"2025-11-24T12:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.897458 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.897530 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.897583 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.897610 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.897629 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:05Z","lastTransitionTime":"2025-11-24T12:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.999544 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.999613 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.999625 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.999644 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:05 crc kubenswrapper[4930]: I1124 12:00:05.999657 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:05Z","lastTransitionTime":"2025-11-24T12:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.083916 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:06 crc kubenswrapper[4930]: E1124 12:00:06.084040 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.102642 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.102711 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.102724 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.102741 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.102753 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:06Z","lastTransitionTime":"2025-11-24T12:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.205206 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.205244 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.205256 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.205273 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.205286 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:06Z","lastTransitionTime":"2025-11-24T12:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.308236 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.308268 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.308279 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.308295 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.308308 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:06Z","lastTransitionTime":"2025-11-24T12:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.410898 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.410944 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.411042 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.411055 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.411064 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:06Z","lastTransitionTime":"2025-11-24T12:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.513681 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.513723 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.513736 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.513755 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.513766 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:06Z","lastTransitionTime":"2025-11-24T12:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.616072 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.616119 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.616131 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.616149 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.616162 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:06Z","lastTransitionTime":"2025-11-24T12:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.718572 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.718619 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.718634 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.718652 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.718664 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:06Z","lastTransitionTime":"2025-11-24T12:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.821762 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.822068 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.822139 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.822215 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.822317 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:06Z","lastTransitionTime":"2025-11-24T12:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.924984 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.925587 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.925675 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.925769 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:06 crc kubenswrapper[4930]: I1124 12:00:06.925840 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:06Z","lastTransitionTime":"2025-11-24T12:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.028689 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.028755 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.028778 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.028808 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.028832 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:07Z","lastTransitionTime":"2025-11-24T12:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.084084 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.084212 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:07 crc kubenswrapper[4930]: E1124 12:00:07.084322 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.084336 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:07 crc kubenswrapper[4930]: E1124 12:00:07.084452 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:07 crc kubenswrapper[4930]: E1124 12:00:07.084614 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.131425 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.131461 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.131469 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.131484 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.131494 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:07Z","lastTransitionTime":"2025-11-24T12:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.233630 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.233672 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.233683 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.233699 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.233710 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:07Z","lastTransitionTime":"2025-11-24T12:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.335875 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.336245 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.336345 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.336442 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.336546 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:07Z","lastTransitionTime":"2025-11-24T12:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.438620 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.438690 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.438700 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.438732 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.438744 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:07Z","lastTransitionTime":"2025-11-24T12:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.541740 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.542058 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.542147 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.542256 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.542344 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:07Z","lastTransitionTime":"2025-11-24T12:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.644772 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.645104 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.645279 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.645398 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.645508 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:07Z","lastTransitionTime":"2025-11-24T12:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.748774 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.748818 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.748856 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.748877 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.748888 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:07Z","lastTransitionTime":"2025-11-24T12:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.851245 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.851297 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.851309 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.851327 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.851340 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:07Z","lastTransitionTime":"2025-11-24T12:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.953857 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.954454 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.954655 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.954699 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:07 crc kubenswrapper[4930]: I1124 12:00:07.954759 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:07Z","lastTransitionTime":"2025-11-24T12:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.058369 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.058430 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.058445 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.058468 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.058480 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:08Z","lastTransitionTime":"2025-11-24T12:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.083808 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:08 crc kubenswrapper[4930]: E1124 12:00:08.083953 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.161048 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.161085 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.161095 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.161110 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.161120 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:08Z","lastTransitionTime":"2025-11-24T12:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.263939 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.264264 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.264356 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.264448 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.264556 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:08Z","lastTransitionTime":"2025-11-24T12:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.366440 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.366486 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.366497 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.366512 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.366522 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:08Z","lastTransitionTime":"2025-11-24T12:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.469032 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.469086 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.469102 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.469125 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.469142 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:08Z","lastTransitionTime":"2025-11-24T12:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.572177 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.572224 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.572236 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.572254 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.572266 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:08Z","lastTransitionTime":"2025-11-24T12:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.679172 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.679210 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.679218 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.679232 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.679242 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:08Z","lastTransitionTime":"2025-11-24T12:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.781796 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.781833 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.781843 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.781855 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.781864 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:08Z","lastTransitionTime":"2025-11-24T12:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.883581 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.883620 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.883631 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.883652 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.883669 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:08Z","lastTransitionTime":"2025-11-24T12:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.986463 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.986529 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.986572 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.986595 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:08 crc kubenswrapper[4930]: I1124 12:00:08.986612 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:08Z","lastTransitionTime":"2025-11-24T12:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.084572 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.084649 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:09 crc kubenswrapper[4930]: E1124 12:00:09.084709 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:09 crc kubenswrapper[4930]: E1124 12:00:09.084812 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.085039 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:09 crc kubenswrapper[4930]: E1124 12:00:09.085218 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.089102 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.089134 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.089145 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.089159 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.089170 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.191524 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.191572 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.191580 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.191594 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.191608 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.293575 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.293609 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.293618 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.293631 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.293640 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.364786 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.364830 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.364838 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.364854 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.364863 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:09 crc kubenswrapper[4930]: E1124 12:00:09.376402 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.379436 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.379488 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.379500 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.379515 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.379526 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:09 crc kubenswrapper[4930]: E1124 12:00:09.389821 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.393782 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.393808 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.393817 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.393830 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.393838 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:09 crc kubenswrapper[4930]: E1124 12:00:09.403345 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.406061 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.406089 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.406097 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.406110 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.406118 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.418863 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.418888 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.418896 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.418909 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.418919 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:09 crc kubenswrapper[4930]: E1124 12:00:09.430004 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:09 crc kubenswrapper[4930]: E1124 12:00:09.430174 4930 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.431592 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.431641 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.431654 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.431671 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.431684 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.534273 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.534307 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.534315 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.534328 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.534336 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.637292 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.637322 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.637333 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.637348 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.637358 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.739623 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.739654 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.739671 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.739686 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.739696 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.842513 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.842560 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.842571 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.842585 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.842594 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.944568 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.944613 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.944627 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.944644 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:09 crc kubenswrapper[4930]: I1124 12:00:09.944656 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:09Z","lastTransitionTime":"2025-11-24T12:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.047261 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.047286 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.047294 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.047305 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.047313 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:10Z","lastTransitionTime":"2025-11-24T12:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.084824 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:10 crc kubenswrapper[4930]: E1124 12:00:10.084951 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.149039 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.149071 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.149079 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.149092 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.149100 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:10Z","lastTransitionTime":"2025-11-24T12:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.251027 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.251085 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.251095 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.251109 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.251120 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:10Z","lastTransitionTime":"2025-11-24T12:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.352846 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.352879 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.352888 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.352900 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.352908 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:10Z","lastTransitionTime":"2025-11-24T12:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.457187 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.457254 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.457277 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.457309 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.457334 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:10Z","lastTransitionTime":"2025-11-24T12:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.560469 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.560506 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.560517 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.560532 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.560547 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:10Z","lastTransitionTime":"2025-11-24T12:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.664467 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.664576 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.664602 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.664633 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.664655 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:10Z","lastTransitionTime":"2025-11-24T12:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.768064 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.768419 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.768710 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.768914 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.769228 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:10Z","lastTransitionTime":"2025-11-24T12:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.871846 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.872101 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.872174 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.872244 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.872303 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:10Z","lastTransitionTime":"2025-11-24T12:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.974718 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.975040 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.975279 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.975511 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:10 crc kubenswrapper[4930]: I1124 12:00:10.975837 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:10Z","lastTransitionTime":"2025-11-24T12:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.078750 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.078798 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.078812 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.078829 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.078840 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:11Z","lastTransitionTime":"2025-11-24T12:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.083976 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:11 crc kubenswrapper[4930]: E1124 12:00:11.084155 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.084044 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.084009 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:11 crc kubenswrapper[4930]: E1124 12:00:11.084478 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:11 crc kubenswrapper[4930]: E1124 12:00:11.084338 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.181707 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.181745 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.181756 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.181772 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.181785 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:11Z","lastTransitionTime":"2025-11-24T12:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.284104 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.284157 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.284173 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.284194 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.284210 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:11Z","lastTransitionTime":"2025-11-24T12:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.387161 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.387237 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.387249 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.387265 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.387278 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:11Z","lastTransitionTime":"2025-11-24T12:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.489767 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.489825 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.489837 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.489853 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.489866 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:11Z","lastTransitionTime":"2025-11-24T12:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.592107 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.592147 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.592157 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.592170 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.592180 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:11Z","lastTransitionTime":"2025-11-24T12:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.695373 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.695408 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.695418 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.695432 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.695442 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:11Z","lastTransitionTime":"2025-11-24T12:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.797849 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.797887 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.797896 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.797927 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.797940 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:11Z","lastTransitionTime":"2025-11-24T12:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.900844 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.900917 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.900927 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.900941 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:11 crc kubenswrapper[4930]: I1124 12:00:11.900951 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:11Z","lastTransitionTime":"2025-11-24T12:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.003214 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.003263 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.003276 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.003291 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.003302 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:12Z","lastTransitionTime":"2025-11-24T12:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.083935 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:12 crc kubenswrapper[4930]: E1124 12:00:12.084106 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.105478 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.105764 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.105907 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.106005 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.106092 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:12Z","lastTransitionTime":"2025-11-24T12:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.208771 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.209015 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.209076 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.209142 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.209220 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:12Z","lastTransitionTime":"2025-11-24T12:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.312872 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.312939 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.312969 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.312989 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.313001 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:12Z","lastTransitionTime":"2025-11-24T12:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.415749 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.415793 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.415825 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.415844 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.415856 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:12Z","lastTransitionTime":"2025-11-24T12:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.518505 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.518568 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.518577 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.518597 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.518607 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:12Z","lastTransitionTime":"2025-11-24T12:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.622074 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.622136 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.622161 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.622190 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.622211 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:12Z","lastTransitionTime":"2025-11-24T12:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.726022 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.726062 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.726075 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.726090 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.726099 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:12Z","lastTransitionTime":"2025-11-24T12:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.828118 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.828155 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.828168 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.828186 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.828197 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:12Z","lastTransitionTime":"2025-11-24T12:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.930964 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.931001 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.931010 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.931025 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:12 crc kubenswrapper[4930]: I1124 12:00:12.931034 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:12Z","lastTransitionTime":"2025-11-24T12:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.032846 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.033174 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.033243 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.033307 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.033373 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:13Z","lastTransitionTime":"2025-11-24T12:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.084035 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.084116 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.084196 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:13 crc kubenswrapper[4930]: E1124 12:00:13.084201 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:13 crc kubenswrapper[4930]: E1124 12:00:13.084282 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:13 crc kubenswrapper[4930]: E1124 12:00:13.084340 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.136171 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.136198 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.136206 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.136219 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.136227 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:13Z","lastTransitionTime":"2025-11-24T12:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.238400 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.238437 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.238448 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.238464 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.238475 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:13Z","lastTransitionTime":"2025-11-24T12:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.340118 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.340145 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.340153 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.340165 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.340173 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:13Z","lastTransitionTime":"2025-11-24T12:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.442401 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.442439 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.442447 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.442461 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.442469 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:13Z","lastTransitionTime":"2025-11-24T12:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.544279 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.544316 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.544325 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.544339 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.544349 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:13Z","lastTransitionTime":"2025-11-24T12:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.646427 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.646474 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.646484 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.646500 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.646511 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:13Z","lastTransitionTime":"2025-11-24T12:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.749220 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.749264 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.749274 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.749286 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.749295 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:13Z","lastTransitionTime":"2025-11-24T12:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.852777 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.853032 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.853140 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.853218 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.853293 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:13Z","lastTransitionTime":"2025-11-24T12:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.956363 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.956439 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.956453 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.956486 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:13 crc kubenswrapper[4930]: I1124 12:00:13.956500 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:13Z","lastTransitionTime":"2025-11-24T12:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.059183 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.059232 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.059244 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.059263 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.059284 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:14Z","lastTransitionTime":"2025-11-24T12:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.084505 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:14 crc kubenswrapper[4930]: E1124 12:00:14.084789 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.085758 4930 scope.go:117] "RemoveContainer" containerID="b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e" Nov 24 12:00:14 crc kubenswrapper[4930]: E1124 12:00:14.086128 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.110983 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.132014 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"ller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:59.940653 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:59:59.940962 6580 shared_informer.go:320] Caches are synced 
for node-tracker-controller\\\\nI1124 11:59:59.940986 6580 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 11:59:59.941134 6580 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:59:59.944340 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:59:59.944405 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:59.946174 6580 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:59:59.946310 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:59:59.946330 6580 factory.go:656] Stopping watch factory\\\\nI1124 11:59:59.946346 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:59.946403 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:59:59.946505 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.150236 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.163051 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.163149 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.163174 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.163205 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.163226 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:14Z","lastTransitionTime":"2025-11-24T12:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.163761 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc 
kubenswrapper[4930]: I1124 12:00:14.177518 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74710830-f474-458e-b871-0b7be860bdad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae67af2476e24057506bc0b9522c36edc01c9ed0cbdf28e8a507b3145ca8a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7911dcd8010054bda50955cacdf50d03f6f33976ce46922e810d8a903ca3f8\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d736c52fd83cddaf5cb6a8640251e73f0b8fd7f66a5190ae29aecebd4b6c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.195123 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.211160 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.226687 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.244595 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.258299 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.265855 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.266088 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.266345 4930 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.266646 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.266792 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:14Z","lastTransitionTime":"2025-11-24T12:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.271661 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79
eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb21f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.283793 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.332838 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d8
2b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.350108 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.363955 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.369283 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.369344 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.369359 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.369385 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.369400 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:14Z","lastTransitionTime":"2025-11-24T12:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.380774 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.396695 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-24T12:00:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.472703 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.472770 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.472784 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.472809 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.472824 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:14Z","lastTransitionTime":"2025-11-24T12:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.577114 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.577577 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.577592 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.577619 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.577635 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:14Z","lastTransitionTime":"2025-11-24T12:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.680811 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.680869 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.680885 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.680910 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.680931 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:14Z","lastTransitionTime":"2025-11-24T12:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.784437 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.784624 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.784649 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.784711 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.784729 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:14Z","lastTransitionTime":"2025-11-24T12:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.887501 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.887588 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.887603 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.887630 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.887650 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:14Z","lastTransitionTime":"2025-11-24T12:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.990053 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.990103 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.990115 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.990135 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:14 crc kubenswrapper[4930]: I1124 12:00:14.990147 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:14Z","lastTransitionTime":"2025-11-24T12:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.083509 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.083575 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:15 crc kubenswrapper[4930]: E1124 12:00:15.083668 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:15 crc kubenswrapper[4930]: E1124 12:00:15.083794 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.083777 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:15 crc kubenswrapper[4930]: E1124 12:00:15.083945 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.092267 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.092324 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.092339 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.092363 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.092382 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:15Z","lastTransitionTime":"2025-11-24T12:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.195175 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.195223 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.195241 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.195261 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.195272 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:15Z","lastTransitionTime":"2025-11-24T12:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.299164 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.299212 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.299221 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.299236 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.299246 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:15Z","lastTransitionTime":"2025-11-24T12:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.402167 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.402417 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.402514 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.402620 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.402692 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:15Z","lastTransitionTime":"2025-11-24T12:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.505488 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.505577 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.505596 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.505626 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.505841 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:15Z","lastTransitionTime":"2025-11-24T12:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.609248 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.609681 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.609783 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.609878 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.609965 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:15Z","lastTransitionTime":"2025-11-24T12:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.726812 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.726856 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.726867 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.726884 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.726895 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:15Z","lastTransitionTime":"2025-11-24T12:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.830037 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.830509 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.830606 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.830676 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.830744 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:15Z","lastTransitionTime":"2025-11-24T12:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.933753 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.933878 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.933923 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.933941 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:15 crc kubenswrapper[4930]: I1124 12:00:15.933952 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:15Z","lastTransitionTime":"2025-11-24T12:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.037051 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.037079 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.037087 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.037102 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.037111 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:16Z","lastTransitionTime":"2025-11-24T12:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.083786 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:16 crc kubenswrapper[4930]: E1124 12:00:16.083936 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.139271 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.139321 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.139332 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.139352 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.139365 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:16Z","lastTransitionTime":"2025-11-24T12:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.242052 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.242106 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.242117 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.242136 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.242148 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:16Z","lastTransitionTime":"2025-11-24T12:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.345168 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.345208 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.345220 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.345237 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.345248 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:16Z","lastTransitionTime":"2025-11-24T12:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.447504 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.447582 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.447591 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.447609 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.447617 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:16Z","lastTransitionTime":"2025-11-24T12:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.550399 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.550432 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.550441 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.550453 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.550462 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:16Z","lastTransitionTime":"2025-11-24T12:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.653694 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.653751 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.653765 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.653781 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.653793 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:16Z","lastTransitionTime":"2025-11-24T12:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.756047 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.756098 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.756127 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.756143 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.756153 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:16Z","lastTransitionTime":"2025-11-24T12:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.859718 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.859777 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.859792 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.859816 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.859830 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:16Z","lastTransitionTime":"2025-11-24T12:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.962998 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.963066 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.963084 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.963106 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:16 crc kubenswrapper[4930]: I1124 12:00:16.963119 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:16Z","lastTransitionTime":"2025-11-24T12:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.066131 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.066176 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.066192 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.066218 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.066240 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:17Z","lastTransitionTime":"2025-11-24T12:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.100146 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:17 crc kubenswrapper[4930]: E1124 12:00:17.100424 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.100457 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.100529 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:17 crc kubenswrapper[4930]: E1124 12:00:17.100639 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:17 crc kubenswrapper[4930]: E1124 12:00:17.100705 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.169525 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.169635 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.169658 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.169690 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.169711 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:17Z","lastTransitionTime":"2025-11-24T12:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.273306 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.273382 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.273406 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.273439 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.273466 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:17Z","lastTransitionTime":"2025-11-24T12:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.377055 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.377107 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.377119 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.377137 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.377149 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:17Z","lastTransitionTime":"2025-11-24T12:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.479713 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.479804 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.479829 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.479868 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.479895 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:17Z","lastTransitionTime":"2025-11-24T12:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.583036 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.583118 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.583144 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.583177 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.583200 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:17Z","lastTransitionTime":"2025-11-24T12:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.685822 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.685879 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.685892 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.685909 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.685923 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:17Z","lastTransitionTime":"2025-11-24T12:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.789988 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.790052 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.790065 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.790085 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.790096 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:17Z","lastTransitionTime":"2025-11-24T12:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.892904 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.892961 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.892974 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.892992 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.893028 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:17Z","lastTransitionTime":"2025-11-24T12:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.996285 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.996340 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.996352 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.996369 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:17 crc kubenswrapper[4930]: I1124 12:00:17.996380 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:17Z","lastTransitionTime":"2025-11-24T12:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.084293 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:18 crc kubenswrapper[4930]: E1124 12:00:18.084608 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.099732 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.099809 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.099837 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.099873 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.099899 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:18Z","lastTransitionTime":"2025-11-24T12:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.203482 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.203565 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.203580 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.203604 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.203620 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:18Z","lastTransitionTime":"2025-11-24T12:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.306093 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.306141 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.306153 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.306172 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.306183 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:18Z","lastTransitionTime":"2025-11-24T12:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.408210 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.408252 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.408261 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.408275 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.408286 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:18Z","lastTransitionTime":"2025-11-24T12:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.511152 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.511197 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.511209 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.511224 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.511237 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:18Z","lastTransitionTime":"2025-11-24T12:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.613172 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.613448 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.613514 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.613611 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.613712 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:18Z","lastTransitionTime":"2025-11-24T12:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.716058 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.716090 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.716097 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.716109 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.716119 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:18Z","lastTransitionTime":"2025-11-24T12:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.818157 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.818205 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.818217 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.818235 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.818246 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:18Z","lastTransitionTime":"2025-11-24T12:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.920016 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.920042 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.920051 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.920063 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:18 crc kubenswrapper[4930]: I1124 12:00:18.920071 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:18Z","lastTransitionTime":"2025-11-24T12:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.023325 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.023400 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.023419 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.023451 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.023470 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.084328 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.084476 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:19 crc kubenswrapper[4930]: E1124 12:00:19.084504 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.084581 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:19 crc kubenswrapper[4930]: E1124 12:00:19.084688 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:19 crc kubenswrapper[4930]: E1124 12:00:19.084772 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.126016 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.126058 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.126068 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.126086 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.126096 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.229263 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.229306 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.229317 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.229333 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.229344 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.334072 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.334674 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.334873 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.335051 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.335212 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.439168 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.439491 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.439579 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.439686 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.439779 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.541908 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.542232 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.542400 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.542519 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.542713 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.645136 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.645176 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.645192 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.645211 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.645224 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.747344 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.747383 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.747392 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.747405 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.747415 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.800859 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.800889 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.800900 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.800914 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.800922 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: E1124 12:00:19.811979 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.814981 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.815002 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.815011 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.815024 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.815032 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: E1124 12:00:19.825422 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.831704 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.831735 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.831744 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.831760 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.831771 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: E1124 12:00:19.844093 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.847129 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.847157 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.847169 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.847206 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.847217 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: E1124 12:00:19.859449 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.862155 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.862182 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.862192 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.862205 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.862214 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: E1124 12:00:19.872482 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:19 crc kubenswrapper[4930]: E1124 12:00:19.875357 4930 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.877235 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.877272 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.877282 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.877296 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.877307 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.979982 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.980018 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.980027 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.980041 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:19 crc kubenswrapper[4930]: I1124 12:00:19.980051 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:19Z","lastTransitionTime":"2025-11-24T12:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.031852 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:20 crc kubenswrapper[4930]: E1124 12:00:20.032020 4930 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:00:20 crc kubenswrapper[4930]: E1124 12:00:20.032078 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs podName:96ced043-6cad-4f17-8648-624f36bf14f1 nodeName:}" failed. No retries permitted until 2025-11-24 12:00:52.032061924 +0000 UTC m=+98.646389874 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs") pod "network-metrics-daemon-r4jtv" (UID: "96ced043-6cad-4f17-8648-624f36bf14f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.082636 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.082690 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.082699 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.082733 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.082744 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:20Z","lastTransitionTime":"2025-11-24T12:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.084096 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:20 crc kubenswrapper[4930]: E1124 12:00:20.084226 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.184894 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.184940 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.184953 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.184969 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.184981 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:20Z","lastTransitionTime":"2025-11-24T12:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.287742 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.287792 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.287804 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.287820 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.287833 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:20Z","lastTransitionTime":"2025-11-24T12:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.389746 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.389782 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.389790 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.389802 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.389810 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:20Z","lastTransitionTime":"2025-11-24T12:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.491581 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.491616 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.491628 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.491644 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.491655 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:20Z","lastTransitionTime":"2025-11-24T12:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.594014 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.594050 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.594060 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.594074 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.594084 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:20Z","lastTransitionTime":"2025-11-24T12:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.696296 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.696342 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.696353 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.696369 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.696380 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:20Z","lastTransitionTime":"2025-11-24T12:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.798772 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.798821 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.798836 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.798853 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.798935 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:20Z","lastTransitionTime":"2025-11-24T12:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.902196 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.902300 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.902312 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.902339 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:20 crc kubenswrapper[4930]: I1124 12:00:20.902353 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:20Z","lastTransitionTime":"2025-11-24T12:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.004465 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.004502 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.004511 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.004525 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.004533 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:21Z","lastTransitionTime":"2025-11-24T12:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.084650 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.084692 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.084765 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:21 crc kubenswrapper[4930]: E1124 12:00:21.085034 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:21 crc kubenswrapper[4930]: E1124 12:00:21.085136 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:21 crc kubenswrapper[4930]: E1124 12:00:21.085215 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.106834 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.106877 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.106887 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.106903 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.106913 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:21Z","lastTransitionTime":"2025-11-24T12:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.209918 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.209971 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.209981 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.209995 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.210007 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:21Z","lastTransitionTime":"2025-11-24T12:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.312260 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.312295 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.312307 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.312323 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.312336 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:21Z","lastTransitionTime":"2025-11-24T12:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.415234 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.415278 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.415290 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.415311 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.415324 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:21Z","lastTransitionTime":"2025-11-24T12:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.492931 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5lvxv_68c34ffc-f1cd-4828-b83c-22bd0c02f364/kube-multus/0.log" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.492994 4930 generic.go:334] "Generic (PLEG): container finished" podID="68c34ffc-f1cd-4828-b83c-22bd0c02f364" containerID="d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336" exitCode=1 Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.493034 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5lvxv" event={"ID":"68c34ffc-f1cd-4828-b83c-22bd0c02f364","Type":"ContainerDied","Data":"d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336"} Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.493710 4930 scope.go:117] "RemoveContainer" containerID="d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.509188 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.517570 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.517600 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.517613 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.517629 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.517640 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:21Z","lastTransitionTime":"2025-11-24T12:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.524315 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.537249 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb2
1f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.548007 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.562354 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.574237 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.583965 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.597412 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.609772 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e264
71353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.619946 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.619989 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.620001 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:21 crc 
kubenswrapper[4930]: I1124 12:00:21.620016 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.620027 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:21Z","lastTransitionTime":"2025-11-24T12:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.625769 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:21Z\\\",\\\"message\\\":\\\"2025-11-24T11:59:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813\\\\n2025-11-24T11:59:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813 to /host/opt/cni/bin/\\\\n2025-11-24T11:59:36Z [verbose] multus-daemon started\\\\n2025-11-24T11:59:36Z [verbose] Readiness Indicator file check\\\\n2025-11-24T12:00:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.645755 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"ller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:59.940653 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:59:59.940962 6580 shared_informer.go:320] Caches are synced 
for node-tracker-controller\\\\nI1124 11:59:59.940986 6580 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 11:59:59.941134 6580 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:59:59.944340 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:59:59.944405 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:59.946174 6580 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:59:59.946310 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:59:59.946330 6580 factory.go:656] Stopping watch factory\\\\nI1124 11:59:59.946346 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:59.946403 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:59:59.946505 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.661300 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.671751 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc 
kubenswrapper[4930]: I1124 12:00:21.683351 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74710830-f474-458e-b871-0b7be860bdad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae67af2476e24057506bc0b9522c36edc01c9ed0cbdf28e8a507b3145ca8a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7911dcd8010054bda50955cacdf50d03f6f33976ce46922e810d8a903ca3f8\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d736c52fd83cddaf5cb6a8640251e73f0b8fd7f66a5190ae29aecebd4b6c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.697839 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.707615 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.718291 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.721973 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.721999 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.722007 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.722020 4930 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.722028 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:21Z","lastTransitionTime":"2025-11-24T12:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.824380 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.824433 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.824479 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.824498 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.824509 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:21Z","lastTransitionTime":"2025-11-24T12:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.926913 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.926970 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.926980 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.926998 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:21 crc kubenswrapper[4930]: I1124 12:00:21.927008 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:21Z","lastTransitionTime":"2025-11-24T12:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.030284 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.030341 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.030354 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.030374 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.030386 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:22Z","lastTransitionTime":"2025-11-24T12:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.083927 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:22 crc kubenswrapper[4930]: E1124 12:00:22.084080 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.132939 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.132996 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.133006 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.133028 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.133042 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:22Z","lastTransitionTime":"2025-11-24T12:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.235485 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.235531 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.235560 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.235576 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.235587 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:22Z","lastTransitionTime":"2025-11-24T12:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.338503 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.338574 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.338584 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.338599 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.338607 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:22Z","lastTransitionTime":"2025-11-24T12:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.441445 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.441509 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.441522 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.441559 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.441569 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:22Z","lastTransitionTime":"2025-11-24T12:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.498242 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5lvxv_68c34ffc-f1cd-4828-b83c-22bd0c02f364/kube-multus/0.log" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.498307 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5lvxv" event={"ID":"68c34ffc-f1cd-4828-b83c-22bd0c02f364","Type":"ContainerStarted","Data":"c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec"} Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.513446 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.525669 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.538084 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb2
1f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.543113 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.543138 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.543148 4930 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.543160 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.543169 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:22Z","lastTransitionTime":"2025-11-24T12:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.554966 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.568571 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.578988 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.587918 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.599038 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.609257 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.625851 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"ller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:59.940653 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:59:59.940962 6580 shared_informer.go:320] Caches are synced 
for node-tracker-controller\\\\nI1124 11:59:59.940986 6580 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 11:59:59.941134 6580 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:59:59.944340 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:59:59.944405 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:59.946174 6580 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:59:59.946310 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:59:59.946330 6580 factory.go:656] Stopping watch factory\\\\nI1124 11:59:59.946346 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:59.946403 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:59:59.946505 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.639679 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.645122 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.645156 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.645166 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.645182 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.645191 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:22Z","lastTransitionTime":"2025-11-24T12:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.651621 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc 
kubenswrapper[4930]: I1124 12:00:22.663877 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74710830-f474-458e-b871-0b7be860bdad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae67af2476e24057506bc0b9522c36edc01c9ed0cbdf28e8a507b3145ca8a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7911dcd8010054bda50955cacdf50d03f6f33976ce46922e810d8a903ca3f8\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d736c52fd83cddaf5cb6a8640251e73f0b8fd7f66a5190ae29aecebd4b6c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.679067 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.693506 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.706863 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.721138 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:21Z\\\",\\\"message\\\":\\\"2025-11-24T11:59:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813\\\\n2025-11-24T11:59:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813 to /host/opt/cni/bin/\\\\n2025-11-24T11:59:36Z [verbose] multus-daemon started\\\\n2025-11-24T11:59:36Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T12:00:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.747157 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.747203 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.747212 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.747229 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.747242 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:22Z","lastTransitionTime":"2025-11-24T12:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.849317 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.849360 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.849370 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.849384 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.849393 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:22Z","lastTransitionTime":"2025-11-24T12:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.951897 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.951950 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.951964 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.951983 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:22 crc kubenswrapper[4930]: I1124 12:00:22.951997 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:22Z","lastTransitionTime":"2025-11-24T12:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.054870 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.055634 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.055650 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.055666 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.055675 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:23Z","lastTransitionTime":"2025-11-24T12:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.084502 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.084501 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:23 crc kubenswrapper[4930]: E1124 12:00:23.084722 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.084516 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:23 crc kubenswrapper[4930]: E1124 12:00:23.084805 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:23 crc kubenswrapper[4930]: E1124 12:00:23.084631 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.157502 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.157557 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.157569 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.157585 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.157595 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:23Z","lastTransitionTime":"2025-11-24T12:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.260061 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.260098 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.260106 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.260119 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.260128 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:23Z","lastTransitionTime":"2025-11-24T12:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.361756 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.361840 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.361852 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.361869 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.361882 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:23Z","lastTransitionTime":"2025-11-24T12:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.464963 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.465009 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.465018 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.465032 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.465041 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:23Z","lastTransitionTime":"2025-11-24T12:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.567180 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.567227 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.567235 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.567249 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.567258 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:23Z","lastTransitionTime":"2025-11-24T12:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.670131 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.670183 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.670196 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.670213 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.670251 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:23Z","lastTransitionTime":"2025-11-24T12:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.772468 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.772778 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.772882 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.772945 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.773010 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:23Z","lastTransitionTime":"2025-11-24T12:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.875655 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.876057 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.876153 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.876288 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.876398 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:23Z","lastTransitionTime":"2025-11-24T12:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.979243 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.979293 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.979307 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.979332 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:23 crc kubenswrapper[4930]: I1124 12:00:23.979347 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:23Z","lastTransitionTime":"2025-11-24T12:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.081868 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.081916 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.081926 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.081940 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.081949 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:24Z","lastTransitionTime":"2025-11-24T12:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.084161 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:24 crc kubenswrapper[4930]: E1124 12:00:24.084270 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.113623 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.135042 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.148962 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.159793 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.175639 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.186437 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.187078 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.187430 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.187669 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.187946 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:24Z","lastTransitionTime":"2025-11-24T12:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.194229 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.206675 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc 
kubenswrapper[4930]: I1124 12:00:24.220225 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74710830-f474-458e-b871-0b7be860bdad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae67af2476e24057506bc0b9522c36edc01c9ed0cbdf28e8a507b3145ca8a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7911dcd8010054bda50955cacdf50d03f6f33976ce46922e810d8a903ca3f8\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d736c52fd83cddaf5cb6a8640251e73f0b8fd7f66a5190ae29aecebd4b6c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.233658 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.246264 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.259152 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.271989 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:21Z\\\",\\\"message\\\":\\\"2025-11-24T11:59:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813\\\\n2025-11-24T11:59:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813 to /host/opt/cni/bin/\\\\n2025-11-24T11:59:36Z [verbose] multus-daemon started\\\\n2025-11-24T11:59:36Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T12:00:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.290326 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.290563 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.290697 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.290819 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.291039 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:24Z","lastTransitionTime":"2025-11-24T12:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.292951 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"ller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:59.940653 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:59:59.940962 6580 shared_informer.go:320] Caches are synced 
for node-tracker-controller\\\\nI1124 11:59:59.940986 6580 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 11:59:59.941134 6580 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:59:59.944340 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:59:59.944405 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:59.946174 6580 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:59:59.946310 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:59:59.946330 6580 factory.go:656] Stopping watch factory\\\\nI1124 11:59:59.946346 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:59.946403 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:59:59.946505 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.307392 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.320607 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.335285 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.348961 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb2
1f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.395149 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.395186 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.395197 4930 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.395214 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.395228 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:24Z","lastTransitionTime":"2025-11-24T12:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.497678 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.497729 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.497738 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.497755 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.497765 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:24Z","lastTransitionTime":"2025-11-24T12:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.600111 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.600151 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.600185 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.600198 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.600209 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:24Z","lastTransitionTime":"2025-11-24T12:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.702875 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.702958 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.702976 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.703003 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.703020 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:24Z","lastTransitionTime":"2025-11-24T12:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.805340 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.805376 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.805386 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.805400 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.805412 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:24Z","lastTransitionTime":"2025-11-24T12:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.908499 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.908574 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.908587 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.908603 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:24 crc kubenswrapper[4930]: I1124 12:00:24.908615 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:24Z","lastTransitionTime":"2025-11-24T12:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.011634 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.011710 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.011730 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.011762 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.011783 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:25Z","lastTransitionTime":"2025-11-24T12:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.084329 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.084366 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.084491 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:25 crc kubenswrapper[4930]: E1124 12:00:25.084553 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:25 crc kubenswrapper[4930]: E1124 12:00:25.084621 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:25 crc kubenswrapper[4930]: E1124 12:00:25.084699 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.114546 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.114586 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.114596 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.114610 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.114621 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:25Z","lastTransitionTime":"2025-11-24T12:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.217190 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.217230 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.217240 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.217252 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.217260 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:25Z","lastTransitionTime":"2025-11-24T12:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.320659 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.320724 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.320743 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.320774 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.320795 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:25Z","lastTransitionTime":"2025-11-24T12:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.425091 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.425147 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.425157 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.425173 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.425184 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:25Z","lastTransitionTime":"2025-11-24T12:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.527632 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.527686 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.527698 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.527718 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.527735 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:25Z","lastTransitionTime":"2025-11-24T12:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.630103 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.630151 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.630160 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.630174 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.630184 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:25Z","lastTransitionTime":"2025-11-24T12:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.773098 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.773144 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.773157 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.773174 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.773187 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:25Z","lastTransitionTime":"2025-11-24T12:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.876681 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.876757 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.876781 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.876805 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.876822 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:25Z","lastTransitionTime":"2025-11-24T12:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.979267 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.979331 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.979342 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.979361 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:25 crc kubenswrapper[4930]: I1124 12:00:25.979377 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:25Z","lastTransitionTime":"2025-11-24T12:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.081663 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.081712 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.081723 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.081740 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.081756 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:26Z","lastTransitionTime":"2025-11-24T12:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.083573 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:26 crc kubenswrapper[4930]: E1124 12:00:26.083713 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.084684 4930 scope.go:117] "RemoveContainer" containerID="b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.183815 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.183849 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.183859 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.183871 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.183881 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:26Z","lastTransitionTime":"2025-11-24T12:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.286439 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.286468 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.286478 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.286492 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.286501 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:26Z","lastTransitionTime":"2025-11-24T12:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.389816 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.389862 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.389875 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.389893 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.389903 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:26Z","lastTransitionTime":"2025-11-24T12:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.492630 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.492713 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.492724 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.492742 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.492753 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:26Z","lastTransitionTime":"2025-11-24T12:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.513966 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/2.log" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.516558 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerStarted","Data":"56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051"} Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.517056 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.540010 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.552497 4930 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.566387 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb2
1f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.586367 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d8
2b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.595504 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.595562 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.595587 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.595609 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.595624 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:26Z","lastTransitionTime":"2025-11-24T12:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.603304 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.616150 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.627764 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.649356 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.664658 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.678700 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74710830-f474-458e-b871-0b7be860bdad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae67af2476e24057506bc0b9522c36edc01c9ed0cbdf28e8a507b3145ca8a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7911dcd8010054bda50955cacdf50d03f6f33976ce46922e810d8a903ca3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d736c52fd83cddaf5cb6a8640251e73f0b8fd7f66a5190ae29aecebd4b6c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.691415 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.698358 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.698389 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.698399 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 
12:00:26.698416 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.698427 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:26Z","lastTransitionTime":"2025-11-24T12:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.703352 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.715657 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.728258 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:21Z\\\",\\\"message\\\":\\\"2025-11-24T11:59:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813\\\\n2025-11-24T11:59:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813 to /host/opt/cni/bin/\\\\n2025-11-24T11:59:36Z [verbose] multus-daemon started\\\\n2025-11-24T11:59:36Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T12:00:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.746986 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"ller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI
1124 11:59:59.940653 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:59:59.940962 6580 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 11:59:59.940986 6580 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 11:59:59.941134 6580 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:59:59.944340 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:59:59.944405 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:59.946174 6580 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:59:59.946310 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:59:59.946330 6580 factory.go:656] Stopping watch factory\\\\nI1124 11:59:59.946346 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:59.946403 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:59:59.946505 6580 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.761755 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.774665 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:26 crc 
kubenswrapper[4930]: I1124 12:00:26.800709 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.800762 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.800776 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.800796 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.800809 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:26Z","lastTransitionTime":"2025-11-24T12:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.902885 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.902919 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.902927 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.902939 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:26 crc kubenswrapper[4930]: I1124 12:00:26.902948 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:26Z","lastTransitionTime":"2025-11-24T12:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.006587 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.006636 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.006648 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.006667 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.006680 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:27Z","lastTransitionTime":"2025-11-24T12:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.084241 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:27 crc kubenswrapper[4930]: E1124 12:00:27.084371 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.084575 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:27 crc kubenswrapper[4930]: E1124 12:00:27.084638 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.084811 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:27 crc kubenswrapper[4930]: E1124 12:00:27.084959 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.109521 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.109577 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.109588 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.109603 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.109615 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:27Z","lastTransitionTime":"2025-11-24T12:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.211477 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.211526 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.211555 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.211569 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.211579 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:27Z","lastTransitionTime":"2025-11-24T12:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.314457 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.314494 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.314503 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.314517 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.314528 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:27Z","lastTransitionTime":"2025-11-24T12:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.417030 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.417070 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.417078 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.417091 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.417100 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:27Z","lastTransitionTime":"2025-11-24T12:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.519050 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.519092 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.519104 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.519118 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.519128 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:27Z","lastTransitionTime":"2025-11-24T12:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.520183 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/3.log" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.520815 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/2.log" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.523587 4930 generic.go:334] "Generic (PLEG): container finished" podID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerID="56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051" exitCode=1 Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.523628 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051"} Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.523662 4930 scope.go:117] "RemoveContainer" containerID="b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.524326 4930 scope.go:117] "RemoveContainer" containerID="56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051" Nov 24 12:00:27 crc kubenswrapper[4930]: E1124 12:00:27.524598 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.539146 4930 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc 
kubenswrapper[4930]: I1124 12:00:27.551968 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74710830-f474-458e-b871-0b7be860bdad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae67af2476e24057506bc0b9522c36edc01c9ed0cbdf28e8a507b3145ca8a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7911dcd8010054bda50955cacdf50d03f6f33976ce46922e810d8a903ca3f8\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d736c52fd83cddaf5cb6a8640251e73f0b8fd7f66a5190ae29aecebd4b6c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.564578 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.577491 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.592493 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.606414 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:21Z\\\",\\\"message\\\":\\\"2025-11-24T11:59:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813\\\\n2025-11-24T11:59:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813 to /host/opt/cni/bin/\\\\n2025-11-24T11:59:36Z [verbose] multus-daemon started\\\\n2025-11-24T11:59:36Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T12:00:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.621993 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.622026 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.622036 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.622050 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.622060 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:27Z","lastTransitionTime":"2025-11-24T12:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.626487 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b51196f6b247ded046ee71662f31ccc4d5f407aade4a61adfd51f6a61f2a518e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:59:59Z\\\",\\\"message\\\":\\\"ller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:59:59.940653 6580 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 11:59:59.940962 6580 shared_informer.go:320] Caches are synced 
for node-tracker-controller\\\\nI1124 11:59:59.940986 6580 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 11:59:59.941134 6580 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:59:59.944340 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:59:59.944405 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:59:59.946174 6580 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:59:59.946310 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:59:59.946330 6580 factory.go:656] Stopping watch factory\\\\nI1124 11:59:59.946346 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:59:59.946403 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 11:59:59.946505 6580 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:27Z\\\",\\\"message\\\":\\\"red: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 12:00:27.006921 6972 obj_retry.go:409] Going to retry *v1.Pod resource setup for 13 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-apiserver/kube-apiserver-crc 
openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-additional-cni-plugins-c8rb7 openshift-multus/network-metrics-daemon-r4jtv openshift-network-operator/iptables-alerter-4ln5h openshift-dns/node-resolver-gfn4n openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk openshift-image-registry/node-ca-8mhdf openshift-multus/multus-5lvxv openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1124 12:00:27.007187 6972 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI1124 12:00:27.007200 6972 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"}
,{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.641330 4930 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.653343 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.665194 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.677789 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb2
1f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.691885 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d8
2b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.702638 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.714306 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.725249 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.725298 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.725314 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.725330 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.725343 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:27Z","lastTransitionTime":"2025-11-24T12:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.727433 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.743570 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.759269 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e264
71353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:27Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.827747 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.828085 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.828171 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:27 crc 
kubenswrapper[4930]: I1124 12:00:27.828294 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.828378 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:27Z","lastTransitionTime":"2025-11-24T12:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.931119 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.931412 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.931523 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.931626 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:27 crc kubenswrapper[4930]: I1124 12:00:27.931696 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:27Z","lastTransitionTime":"2025-11-24T12:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.035071 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.035176 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.035197 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.035234 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.035255 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:28Z","lastTransitionTime":"2025-11-24T12:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.084839 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:28 crc kubenswrapper[4930]: E1124 12:00:28.085012 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.138371 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.138436 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.138455 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.138480 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.138497 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:28Z","lastTransitionTime":"2025-11-24T12:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.240979 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.241110 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.241131 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.241467 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.241625 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:28Z","lastTransitionTime":"2025-11-24T12:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.344078 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.344107 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.344130 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.344142 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.344150 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:28Z","lastTransitionTime":"2025-11-24T12:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.446974 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.447031 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.447041 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.447057 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.447066 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:28Z","lastTransitionTime":"2025-11-24T12:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.528526 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/3.log" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.533175 4930 scope.go:117] "RemoveContainer" containerID="56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051" Nov 24 12:00:28 crc kubenswrapper[4930]: E1124 12:00:28.533393 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.546323 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74710830-f474-458e-b871-0b7be860bdad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae67af2476e24057506bc0b9522c36edc01c9ed0cbdf28e8a507b3145ca8a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7911dcd8010054bda50955cacdf50d03f6f33976ce46922e810d8a903ca3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d736c52fd83cddaf5cb6a8640251e73f0b8fd7f66a5190ae29aecebd4b6c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.548933 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.548968 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.548979 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.548995 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.549031 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:28Z","lastTransitionTime":"2025-11-24T12:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.560877 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.572158 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.587879 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb03
0a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.601284 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:21Z\\\",\\\"message\\\":\\\"2025-11-24T11:59:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813\\\\n2025-11-24T11:59:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813 to /host/opt/cni/bin/\\\\n2025-11-24T11:59:36Z [verbose] multus-daemon started\\\\n2025-11-24T11:59:36Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T12:00:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.622267 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:27Z\\\",\\\"message\\\":\\\"red: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": 
failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 12:00:27.006921 6972 obj_retry.go:409] Going to retry *v1.Pod resource setup for 13 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-additional-cni-plugins-c8rb7 openshift-multus/network-metrics-daemon-r4jtv openshift-network-operator/iptables-alerter-4ln5h openshift-dns/node-resolver-gfn4n openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk openshift-image-registry/node-ca-8mhdf openshift-multus/multus-5lvxv openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1124 12:00:27.007187 6972 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI1124 12:00:27.007200 6972 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.640444 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.650901 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.650940 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.650950 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.650964 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.650976 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:28Z","lastTransitionTime":"2025-11-24T12:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.653191 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc 
kubenswrapper[4930]: I1124 12:00:28.666515 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.678252 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.690632 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1
be2efb21f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.707591 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d8
2b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.721010 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.733168 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.744577 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.753631 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.753682 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.753696 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.753711 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.753721 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:28Z","lastTransitionTime":"2025-11-24T12:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.756629 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.767677 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.856689 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.857060 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.857189 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.857299 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.857418 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:28Z","lastTransitionTime":"2025-11-24T12:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.959967 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.960319 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.960417 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.960503 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:28 crc kubenswrapper[4930]: I1124 12:00:28.960624 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:28Z","lastTransitionTime":"2025-11-24T12:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.064106 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.064562 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.064749 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.064867 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.065098 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:29Z","lastTransitionTime":"2025-11-24T12:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.083481 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:29 crc kubenswrapper[4930]: E1124 12:00:29.083593 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.083625 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:29 crc kubenswrapper[4930]: E1124 12:00:29.083663 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.083495 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:29 crc kubenswrapper[4930]: E1124 12:00:29.083947 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.168473 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.168562 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.168576 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.168593 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.168610 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:29Z","lastTransitionTime":"2025-11-24T12:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.272676 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.272911 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.273066 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.273235 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.273380 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:29Z","lastTransitionTime":"2025-11-24T12:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.376391 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.376674 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.376767 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.376847 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.376964 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:29Z","lastTransitionTime":"2025-11-24T12:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.480526 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.480571 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.480580 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.480600 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.480609 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:29Z","lastTransitionTime":"2025-11-24T12:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.583009 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.583064 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.583078 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.583098 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.583111 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:29Z","lastTransitionTime":"2025-11-24T12:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.685469 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.685512 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.685522 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.685557 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.685569 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:29Z","lastTransitionTime":"2025-11-24T12:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.787820 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.787877 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.787890 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.787906 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.787919 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:29Z","lastTransitionTime":"2025-11-24T12:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.895440 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.895503 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.895515 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.895554 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.895567 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:29Z","lastTransitionTime":"2025-11-24T12:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.998142 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.998188 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.998198 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.998215 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:29 crc kubenswrapper[4930]: I1124 12:00:29.998227 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:29Z","lastTransitionTime":"2025-11-24T12:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.084026 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:30 crc kubenswrapper[4930]: E1124 12:00:30.084422 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.109411 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.109579 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.109612 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.109623 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.109638 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.109649 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.118630 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.118667 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.118677 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.118715 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.118728 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: E1124 12:00:30.131473 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:30Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.136014 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.136051 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.136064 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.136107 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.136121 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: E1124 12:00:30.150161 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:30Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.154776 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.154808 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.154820 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.154835 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.154861 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: E1124 12:00:30.166162 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:30Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.169723 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.169748 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.169757 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.169772 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.169781 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: E1124 12:00:30.183972 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:30Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.187303 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.187341 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.187350 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.187363 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.187376 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: E1124 12:00:30.200900 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:30Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:30 crc kubenswrapper[4930]: E1124 12:00:30.201006 4930 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.211749 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.211776 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.211783 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.211796 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.211805 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.314295 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.314360 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.314379 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.314499 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.314520 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.416752 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.416794 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.416805 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.416823 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.416838 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.519175 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.519466 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.519568 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.519652 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.519774 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.622534 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.622621 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.622638 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.622663 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.622683 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.724921 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.724953 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.724965 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.725016 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.725027 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.828241 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.828311 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.828333 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.828361 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.828384 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.931016 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.931052 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.931059 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.931071 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:30 crc kubenswrapper[4930]: I1124 12:00:30.931080 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:30Z","lastTransitionTime":"2025-11-24T12:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.033890 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.033946 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.033961 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.033981 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.033993 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:31Z","lastTransitionTime":"2025-11-24T12:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.083980 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.084079 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:31 crc kubenswrapper[4930]: E1124 12:00:31.084160 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:31 crc kubenswrapper[4930]: E1124 12:00:31.084307 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.084344 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:31 crc kubenswrapper[4930]: E1124 12:00:31.084422 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.136474 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.136580 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.136605 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.136635 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.136662 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:31Z","lastTransitionTime":"2025-11-24T12:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.240284 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.240852 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.241062 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.241293 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.241518 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:31Z","lastTransitionTime":"2025-11-24T12:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.343990 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.344272 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.344422 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.344560 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.344678 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:31Z","lastTransitionTime":"2025-11-24T12:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.447853 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.448168 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.448288 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.448402 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.448560 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:31Z","lastTransitionTime":"2025-11-24T12:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.550675 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.550719 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.550730 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.550744 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.550753 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:31Z","lastTransitionTime":"2025-11-24T12:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.653438 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.653485 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.653501 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.653522 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.653560 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:31Z","lastTransitionTime":"2025-11-24T12:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.756323 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.756422 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.756435 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.756454 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.756484 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:31Z","lastTransitionTime":"2025-11-24T12:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.859073 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.859408 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.859507 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.859641 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.859739 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:31Z","lastTransitionTime":"2025-11-24T12:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.961853 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.961892 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.961903 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.961919 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:31 crc kubenswrapper[4930]: I1124 12:00:31.961928 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:31Z","lastTransitionTime":"2025-11-24T12:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.064197 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.064235 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.064246 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.064263 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.064274 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:32Z","lastTransitionTime":"2025-11-24T12:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.083523 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:32 crc kubenswrapper[4930]: E1124 12:00:32.083705 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.166624 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.166700 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.166724 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.166748 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.166764 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:32Z","lastTransitionTime":"2025-11-24T12:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.269088 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.269398 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.269562 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.269689 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.269784 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:32Z","lastTransitionTime":"2025-11-24T12:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.372677 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.372712 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.372723 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.372738 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.372749 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:32Z","lastTransitionTime":"2025-11-24T12:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.475726 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.475783 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.475801 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.475823 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.475842 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:32Z","lastTransitionTime":"2025-11-24T12:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.578812 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.578857 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.578873 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.578891 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.578904 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:32Z","lastTransitionTime":"2025-11-24T12:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.681123 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.681189 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.681210 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.681238 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.681260 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:32Z","lastTransitionTime":"2025-11-24T12:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.783672 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.783719 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.783728 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.783740 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.783749 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:32Z","lastTransitionTime":"2025-11-24T12:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.886456 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.886514 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.886526 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.886569 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.886583 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:32Z","lastTransitionTime":"2025-11-24T12:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.989479 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.989812 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.989903 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.990002 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:32 crc kubenswrapper[4930]: I1124 12:00:32.990092 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:32Z","lastTransitionTime":"2025-11-24T12:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.083812 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.083910 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.083912 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:33 crc kubenswrapper[4930]: E1124 12:00:33.084020 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:33 crc kubenswrapper[4930]: E1124 12:00:33.084259 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:33 crc kubenswrapper[4930]: E1124 12:00:33.084357 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.092849 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.092901 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.092917 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.092939 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.092952 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:33Z","lastTransitionTime":"2025-11-24T12:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.195940 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.195979 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.195991 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.196007 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.196017 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:33Z","lastTransitionTime":"2025-11-24T12:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.299079 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.299463 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.299563 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.299643 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.299741 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:33Z","lastTransitionTime":"2025-11-24T12:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.402806 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.402840 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.402853 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.402871 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.402883 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:33Z","lastTransitionTime":"2025-11-24T12:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.506005 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.506036 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.506045 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.506074 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.506083 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:33Z","lastTransitionTime":"2025-11-24T12:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.608468 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.608598 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.608617 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.608649 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.608670 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:33Z","lastTransitionTime":"2025-11-24T12:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.710922 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.710970 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.710982 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.711000 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.711016 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:33Z","lastTransitionTime":"2025-11-24T12:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.813947 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.814015 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.814038 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.814069 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.814094 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:33Z","lastTransitionTime":"2025-11-24T12:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.917226 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.917294 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.917308 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.917330 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:33 crc kubenswrapper[4930]: I1124 12:00:33.917347 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:33Z","lastTransitionTime":"2025-11-24T12:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.020023 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.020122 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.020143 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.020175 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.020195 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:34Z","lastTransitionTime":"2025-11-24T12:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.084669 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:34 crc kubenswrapper[4930]: E1124 12:00:34.085168 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.101073 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.115949 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.124994 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.125051 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.125069 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.125094 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.125114 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:34Z","lastTransitionTime":"2025-11-24T12:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.137369 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.153755 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:21Z\\\",\\\"message\\\":\\\"2025-11-24T11:59:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813\\\\n2025-11-24T11:59:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813 to /host/opt/cni/bin/\\\\n2025-11-24T11:59:36Z [verbose] multus-daemon started\\\\n2025-11-24T11:59:36Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T12:00:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.173821 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:27Z\\\",\\\"message\\\":\\\"red: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": 
failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 12:00:27.006921 6972 obj_retry.go:409] Going to retry *v1.Pod resource setup for 13 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-additional-cni-plugins-c8rb7 openshift-multus/network-metrics-daemon-r4jtv openshift-network-operator/iptables-alerter-4ln5h openshift-dns/node-resolver-gfn4n openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk openshift-image-registry/node-ca-8mhdf openshift-multus/multus-5lvxv openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1124 12:00:27.007187 6972 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI1124 12:00:27.007200 6972 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.187330 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.200050 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc 
kubenswrapper[4930]: I1124 12:00:34.214261 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74710830-f474-458e-b871-0b7be860bdad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae67af2476e24057506bc0b9522c36edc01c9ed0cbdf28e8a507b3145ca8a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7911dcd8010054bda50955cacdf50d03f6f33976ce46922e810d8a903ca3f8\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d736c52fd83cddaf5cb6a8640251e73f0b8fd7f66a5190ae29aecebd4b6c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.225992 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.227574 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.227612 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.227622 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 
12:00:34.227636 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.227648 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:34Z","lastTransitionTime":"2025-11-24T12:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.237824 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.251851 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb2
1f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.264519 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.275377 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.286620 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.298803 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.311715 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d733227-cfa3-4bc6-b6e6-9901b4574412\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edfe006f31272340fa98b4821ee0dce6d60014bbfc82c2d9d3eb94ba793804b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65749fc086891c98be90e9567512181dbec456e46a0a1ee4757ea96a8baad5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65749fc086891c98be90e9567512181dbec456e46a0a1ee4757ea96a8baad5f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.325244 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d8
2b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.329979 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.330024 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.330033 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.330050 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.330061 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:34Z","lastTransitionTime":"2025-11-24T12:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.338221 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.433026 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.433480 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.433578 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.433672 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.433751 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:34Z","lastTransitionTime":"2025-11-24T12:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.537901 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.538237 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.538246 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.538264 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.538274 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:34Z","lastTransitionTime":"2025-11-24T12:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.640717 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.640763 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.640775 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.640794 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.640807 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:34Z","lastTransitionTime":"2025-11-24T12:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.744054 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.744110 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.744124 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.744144 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.744160 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:34Z","lastTransitionTime":"2025-11-24T12:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.846372 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.846415 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.846424 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.846439 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.846449 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:34Z","lastTransitionTime":"2025-11-24T12:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.948152 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.948200 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.948212 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.948230 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:34 crc kubenswrapper[4930]: I1124 12:00:34.948246 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:34Z","lastTransitionTime":"2025-11-24T12:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.050971 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.051041 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.051058 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.051082 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.051101 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:35Z","lastTransitionTime":"2025-11-24T12:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.084294 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.084361 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.084361 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:35 crc kubenswrapper[4930]: E1124 12:00:35.084482 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:35 crc kubenswrapper[4930]: E1124 12:00:35.084674 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:35 crc kubenswrapper[4930]: E1124 12:00:35.084746 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.153810 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.153886 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.153906 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.153987 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.154008 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:35Z","lastTransitionTime":"2025-11-24T12:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.257355 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.257416 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.257428 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.257449 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.257465 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:35Z","lastTransitionTime":"2025-11-24T12:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.359810 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.359901 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.359929 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.359968 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.359998 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:35Z","lastTransitionTime":"2025-11-24T12:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.464148 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.464237 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.464273 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.464310 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.464334 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:35Z","lastTransitionTime":"2025-11-24T12:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.567174 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.567351 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.567365 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.567384 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.567394 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:35Z","lastTransitionTime":"2025-11-24T12:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.670747 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.670828 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.670841 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.670861 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.670874 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:35Z","lastTransitionTime":"2025-11-24T12:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.773932 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.773970 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.773978 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.773992 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.774001 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:35Z","lastTransitionTime":"2025-11-24T12:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.877454 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.877568 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.877588 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.877618 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.877637 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:35Z","lastTransitionTime":"2025-11-24T12:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.980153 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.980194 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.980204 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.980219 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:35 crc kubenswrapper[4930]: I1124 12:00:35.980228 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:35Z","lastTransitionTime":"2025-11-24T12:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.083660 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.083683 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:36 crc kubenswrapper[4930]: E1124 12:00:36.083993 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.084051 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.084114 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.084168 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.084201 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:36Z","lastTransitionTime":"2025-11-24T12:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.187663 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.187737 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.187748 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.187767 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.187777 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:36Z","lastTransitionTime":"2025-11-24T12:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.291747 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.291815 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.291830 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.291862 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.291879 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:36Z","lastTransitionTime":"2025-11-24T12:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.394991 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.395071 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.395087 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.395114 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.395131 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:36Z","lastTransitionTime":"2025-11-24T12:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.497175 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.497251 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.497265 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.497289 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.497304 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:36Z","lastTransitionTime":"2025-11-24T12:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.600073 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.600125 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.600135 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.600151 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.600163 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:36Z","lastTransitionTime":"2025-11-24T12:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.703697 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.703776 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.703802 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.703838 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.703866 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:36Z","lastTransitionTime":"2025-11-24T12:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.807831 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.807898 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.807910 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.807928 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.807940 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:36Z","lastTransitionTime":"2025-11-24T12:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.911397 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.911467 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.911490 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.911530 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.911592 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:36Z","lastTransitionTime":"2025-11-24T12:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.914161 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:00:36 crc kubenswrapper[4930]: E1124 12:00:36.914432 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 12:01:40.914393511 +0000 UTC m=+147.528721491 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.914606 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:36 crc kubenswrapper[4930]: I1124 12:00:36.914659 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:36 crc kubenswrapper[4930]: E1124 12:00:36.914801 4930 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:00:36 crc kubenswrapper[4930]: E1124 12:00:36.914896 4930 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:00:36 crc kubenswrapper[4930]: E1124 12:00:36.914922 4930 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.914892495 +0000 UTC m=+147.529220615 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:00:36 crc kubenswrapper[4930]: E1124 12:00:36.914950 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.914937446 +0000 UTC m=+147.529265396 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.014488 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.014550 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.014562 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.014578 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.014588 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:37Z","lastTransitionTime":"2025-11-24T12:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.015352 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.015439 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:37 crc kubenswrapper[4930]: E1124 12:00:37.015569 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:00:37 crc kubenswrapper[4930]: E1124 12:00:37.015590 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:00:37 crc kubenswrapper[4930]: E1124 12:00:37.015601 4930 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:00:37 crc kubenswrapper[4930]: E1124 12:00:37.015654 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 12:01:41.015637408 +0000 UTC m=+147.629965358 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:00:37 crc kubenswrapper[4930]: E1124 12:00:37.015678 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:00:37 crc kubenswrapper[4930]: E1124 12:00:37.015700 4930 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:00:37 crc kubenswrapper[4930]: E1124 12:00:37.015712 4930 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:00:37 crc kubenswrapper[4930]: E1124 12:00:37.015764 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 12:01:41.015750711 +0000 UTC m=+147.630078651 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.084432 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.084433 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:37 crc kubenswrapper[4930]: E1124 12:00:37.084634 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:37 crc kubenswrapper[4930]: E1124 12:00:37.084724 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.084478 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:37 crc kubenswrapper[4930]: E1124 12:00:37.084795 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.118426 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.118475 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.118483 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.118500 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.118510 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:37Z","lastTransitionTime":"2025-11-24T12:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.221946 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.222020 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.222039 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.222070 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.222088 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:37Z","lastTransitionTime":"2025-11-24T12:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.325569 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.325625 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.325638 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.325655 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.325666 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:37Z","lastTransitionTime":"2025-11-24T12:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.429144 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.429215 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.429233 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.429260 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.429277 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:37Z","lastTransitionTime":"2025-11-24T12:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.533945 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.534049 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.534079 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.534113 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.534134 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:37Z","lastTransitionTime":"2025-11-24T12:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.637494 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.637579 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.637592 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.637615 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.637630 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:37Z","lastTransitionTime":"2025-11-24T12:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.746771 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.746858 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.746874 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.746925 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.746939 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:37Z","lastTransitionTime":"2025-11-24T12:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.851423 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.851496 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.851515 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.851573 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.851597 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:37Z","lastTransitionTime":"2025-11-24T12:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.955567 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.955617 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.955631 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.955648 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:37 crc kubenswrapper[4930]: I1124 12:00:37.955661 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:37Z","lastTransitionTime":"2025-11-24T12:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.058441 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.058518 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.058535 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.058619 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.058645 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:38Z","lastTransitionTime":"2025-11-24T12:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.085621 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:38 crc kubenswrapper[4930]: E1124 12:00:38.086069 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.161101 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.161140 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.161150 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.161164 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.161175 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:38Z","lastTransitionTime":"2025-11-24T12:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.265489 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.265594 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.265615 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.265666 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.265687 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:38Z","lastTransitionTime":"2025-11-24T12:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.369118 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.369164 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.369174 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.369190 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.369202 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:38Z","lastTransitionTime":"2025-11-24T12:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.472389 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.472435 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.472447 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.472464 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.472477 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:38Z","lastTransitionTime":"2025-11-24T12:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.574692 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.574754 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.574771 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.574794 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.574813 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:38Z","lastTransitionTime":"2025-11-24T12:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.678223 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.679276 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.679384 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.679577 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.679697 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:38Z","lastTransitionTime":"2025-11-24T12:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.783105 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.783180 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.783210 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.783249 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.783269 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:38Z","lastTransitionTime":"2025-11-24T12:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.886422 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.886466 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.886479 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.886508 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.886523 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:38Z","lastTransitionTime":"2025-11-24T12:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.989465 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.989532 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.989586 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.989616 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:38 crc kubenswrapper[4930]: I1124 12:00:38.989636 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:38Z","lastTransitionTime":"2025-11-24T12:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.084006 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.084116 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:39 crc kubenswrapper[4930]: E1124 12:00:39.084211 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:39 crc kubenswrapper[4930]: E1124 12:00:39.084301 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.084113 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:39 crc kubenswrapper[4930]: E1124 12:00:39.084417 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.093082 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.093134 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.093145 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.093176 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.093190 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:39Z","lastTransitionTime":"2025-11-24T12:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.196408 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.196485 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.196514 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.196576 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.196602 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:39Z","lastTransitionTime":"2025-11-24T12:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.299698 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.299788 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.299814 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.299858 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.299885 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:39Z","lastTransitionTime":"2025-11-24T12:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.403469 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.403941 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.404127 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.404292 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.404432 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:39Z","lastTransitionTime":"2025-11-24T12:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.507942 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.508018 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.508037 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.508068 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.508084 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:39Z","lastTransitionTime":"2025-11-24T12:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.610030 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.610088 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.610103 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.610125 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.610138 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:39Z","lastTransitionTime":"2025-11-24T12:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.713116 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.713161 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.713173 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.713189 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.713200 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:39Z","lastTransitionTime":"2025-11-24T12:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.815655 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.815921 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.815930 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.815946 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.815956 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:39Z","lastTransitionTime":"2025-11-24T12:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.918720 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.918797 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.918815 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.918841 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:39 crc kubenswrapper[4930]: I1124 12:00:39.918862 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:39Z","lastTransitionTime":"2025-11-24T12:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.021389 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.021717 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.021770 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.021800 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.021822 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.084044 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:40 crc kubenswrapper[4930]: E1124 12:00:40.084251 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.124220 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.124513 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.124655 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.124782 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.124902 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.227698 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.227758 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.227771 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.227791 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.227803 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.281423 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.281504 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.281525 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.281610 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.281678 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: E1124 12:00:40.297119 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:40Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.302652 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.302718 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.302737 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.302762 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.302779 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: E1124 12:00:40.320077 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:40Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.325757 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.325841 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.325864 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.325895 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.325914 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: E1124 12:00:40.341491 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:40Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.347601 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.347658 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.347672 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.347696 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.347714 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: E1124 12:00:40.362218 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:40Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.366829 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.366876 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.366922 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.366947 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.366963 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: E1124 12:00:40.383672 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:40Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:40 crc kubenswrapper[4930]: E1124 12:00:40.383913 4930 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.386004 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.386065 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.386091 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.386123 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.386149 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.488738 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.488768 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.488777 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.488791 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.488801 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.591443 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.591787 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.591885 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.591970 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.592064 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.695653 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.695722 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.695741 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.695770 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.695793 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.798468 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.798533 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.798589 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.798619 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.798640 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.903028 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.903447 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.903607 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.903773 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:40 crc kubenswrapper[4930]: I1124 12:00:40.903921 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:40Z","lastTransitionTime":"2025-11-24T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.008527 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.008654 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.008683 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.008724 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.008750 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:41Z","lastTransitionTime":"2025-11-24T12:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.084475 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.084487 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:41 crc kubenswrapper[4930]: E1124 12:00:41.085267 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.084512 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:41 crc kubenswrapper[4930]: E1124 12:00:41.085460 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:41 crc kubenswrapper[4930]: E1124 12:00:41.085734 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.112316 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.112373 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.112390 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.112412 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.112433 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:41Z","lastTransitionTime":"2025-11-24T12:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.215501 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.215589 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.215601 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.215618 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.215633 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:41Z","lastTransitionTime":"2025-11-24T12:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.318110 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.318153 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.318169 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.318189 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.318204 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:41Z","lastTransitionTime":"2025-11-24T12:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.420831 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.420878 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.420889 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.420906 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.420920 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:41Z","lastTransitionTime":"2025-11-24T12:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.523273 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.523308 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.523316 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.523329 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.523337 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:41Z","lastTransitionTime":"2025-11-24T12:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.625420 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.625472 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.625484 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.625501 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.625514 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:41Z","lastTransitionTime":"2025-11-24T12:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.727628 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.727665 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.727676 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.727693 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.727705 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:41Z","lastTransitionTime":"2025-11-24T12:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.830226 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.830629 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.830689 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.830721 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.830746 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:41Z","lastTransitionTime":"2025-11-24T12:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.933821 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.933898 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.933922 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.933952 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:41 crc kubenswrapper[4930]: I1124 12:00:41.933974 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:41Z","lastTransitionTime":"2025-11-24T12:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.036179 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.036260 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.036274 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.036315 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.036327 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:42Z","lastTransitionTime":"2025-11-24T12:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.083840 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:42 crc kubenswrapper[4930]: E1124 12:00:42.084036 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.085404 4930 scope.go:117] "RemoveContainer" containerID="56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051" Nov 24 12:00:42 crc kubenswrapper[4930]: E1124 12:00:42.085670 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.139122 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.139453 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.139638 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.139794 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.139912 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:42Z","lastTransitionTime":"2025-11-24T12:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.242867 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.242919 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.242930 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.242947 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.242958 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:42Z","lastTransitionTime":"2025-11-24T12:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.345641 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.345812 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.345827 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.345845 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.345858 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:42Z","lastTransitionTime":"2025-11-24T12:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.448321 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.448363 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.448372 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.448384 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.448394 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:42Z","lastTransitionTime":"2025-11-24T12:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.550978 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.551273 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.551366 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.551504 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.551669 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:42Z","lastTransitionTime":"2025-11-24T12:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.654471 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.654815 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.654923 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.655003 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.655069 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:42Z","lastTransitionTime":"2025-11-24T12:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.757645 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.757680 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.757688 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.757700 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.757711 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:42Z","lastTransitionTime":"2025-11-24T12:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.860500 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.860573 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.860590 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.860616 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.860633 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:42Z","lastTransitionTime":"2025-11-24T12:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.963603 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.963682 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.963707 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.963738 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:42 crc kubenswrapper[4930]: I1124 12:00:42.963760 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:42Z","lastTransitionTime":"2025-11-24T12:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.067036 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.067096 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.067113 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.067129 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.067143 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:43Z","lastTransitionTime":"2025-11-24T12:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.083596 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.083607 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.083762 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:43 crc kubenswrapper[4930]: E1124 12:00:43.084008 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:43 crc kubenswrapper[4930]: E1124 12:00:43.084159 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:43 crc kubenswrapper[4930]: E1124 12:00:43.084495 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.169724 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.169804 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.169832 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.169867 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.169906 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:43Z","lastTransitionTime":"2025-11-24T12:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.272420 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.272467 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.272478 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.272495 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.272507 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:43Z","lastTransitionTime":"2025-11-24T12:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.375445 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.375873 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.376283 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.376998 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.377282 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:43Z","lastTransitionTime":"2025-11-24T12:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.479891 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.479954 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.479971 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.479995 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.480018 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:43Z","lastTransitionTime":"2025-11-24T12:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.581719 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.581755 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.581766 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.581782 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.581793 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:43Z","lastTransitionTime":"2025-11-24T12:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.684175 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.684245 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.684262 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.684277 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.684287 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:43Z","lastTransitionTime":"2025-11-24T12:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.786485 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.786526 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.786561 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.786579 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.786589 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:43Z","lastTransitionTime":"2025-11-24T12:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.890015 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.890087 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.890106 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.890133 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.890150 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:43Z","lastTransitionTime":"2025-11-24T12:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.992887 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.992935 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.992947 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.992965 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:43 crc kubenswrapper[4930]: I1124 12:00:43.992976 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:43Z","lastTransitionTime":"2025-11-24T12:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.084707 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:44 crc kubenswrapper[4930]: E1124 12:00:44.084917 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.095200 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.095239 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.095249 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.095263 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.095272 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:44Z","lastTransitionTime":"2025-11-24T12:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.099169 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.111425 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.131186 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7
746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.146339 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:21Z\\\",\\\"message\\\":\\\"2025-11-24T11:59:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813\\\\n2025-11-24T11:59:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813 to /host/opt/cni/bin/\\\\n2025-11-24T11:59:36Z [verbose] multus-daemon started\\\\n2025-11-24T11:59:36Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T12:00:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.182157 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:27Z\\\",\\\"message\\\":\\\"red: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": 
failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 12:00:27.006921 6972 obj_retry.go:409] Going to retry *v1.Pod resource setup for 13 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-additional-cni-plugins-c8rb7 openshift-multus/network-metrics-daemon-r4jtv openshift-network-operator/iptables-alerter-4ln5h openshift-dns/node-resolver-gfn4n openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk openshift-image-registry/node-ca-8mhdf openshift-multus/multus-5lvxv openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1124 12:00:27.007187 6972 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI1124 12:00:27.007200 6972 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.201287 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.201333 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.201793 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.201829 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.201884 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:44Z","lastTransitionTime":"2025-11-24T12:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.203000 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.216408 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.232490 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74710830-f474-458e-b871-0b7be860bdad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae67af2476e24057506bc0b9522c36edc01c9ed0cbdf28e8a507b3145ca8a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7911dcd8010054bda50955cacdf50d03f6f33976ce46922e810d8a903ca3f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d736c52fd83cddaf5cb6a8640251e73f0b8fd7f66a5190ae29aecebd4b6c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b21253
7e12b3b995238a6e626a307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.244736 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.259224 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.271059 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb21f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.288639 4930 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.305038 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.305110 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.305122 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.305138 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.305151 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:44Z","lastTransitionTime":"2025-11-24T12:00:44Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.305472 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.318218 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.330370 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.342302 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d733227-cfa3-4bc6-b6e6-9901b4574412\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edfe006f31272340fa98b4821ee0dce6d60014bbfc82c2d9d3eb94ba793804b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65749fc086891c98be90e9567512181dbec456e46a0a1ee4757ea96a8baad5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65749fc086891c98be90e9567512181dbec456e46a0a1ee4757ea96a8baad5f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.359256 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d8
2b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.373989 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.408500 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.408580 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.408595 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:44 crc 
kubenswrapper[4930]: I1124 12:00:44.408614 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.408625 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:44Z","lastTransitionTime":"2025-11-24T12:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.511164 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.511196 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.511206 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.511223 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.511233 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:44Z","lastTransitionTime":"2025-11-24T12:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.613098 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.613299 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.613357 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.613415 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.613477 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:44Z","lastTransitionTime":"2025-11-24T12:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.715755 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.715803 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.715817 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.715842 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.715857 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:44Z","lastTransitionTime":"2025-11-24T12:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.818264 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.818489 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.818645 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.818780 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.818887 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:44Z","lastTransitionTime":"2025-11-24T12:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.921712 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.921804 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.921815 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.921830 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:44 crc kubenswrapper[4930]: I1124 12:00:44.921860 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:44Z","lastTransitionTime":"2025-11-24T12:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.024181 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.024457 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.024518 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.024600 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.024656 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:45Z","lastTransitionTime":"2025-11-24T12:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.083835 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:45 crc kubenswrapper[4930]: E1124 12:00:45.084117 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.083890 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:45 crc kubenswrapper[4930]: E1124 12:00:45.084299 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.083848 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:45 crc kubenswrapper[4930]: E1124 12:00:45.084587 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.126534 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.126585 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.126596 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.126610 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.126619 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:45Z","lastTransitionTime":"2025-11-24T12:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.228959 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.229023 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.229042 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.229066 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.229084 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:45Z","lastTransitionTime":"2025-11-24T12:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.332060 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.332476 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.332727 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.332945 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.333146 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:45Z","lastTransitionTime":"2025-11-24T12:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.437244 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.437314 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.437341 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.437373 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.437395 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:45Z","lastTransitionTime":"2025-11-24T12:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.540334 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.540420 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.540443 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.540473 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.540495 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:45Z","lastTransitionTime":"2025-11-24T12:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.643884 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.643957 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.643980 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.644007 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.644028 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:45Z","lastTransitionTime":"2025-11-24T12:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.747187 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.747428 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.747512 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.747690 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.747828 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:45Z","lastTransitionTime":"2025-11-24T12:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.872150 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.872185 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.872196 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.872211 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.872221 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:45Z","lastTransitionTime":"2025-11-24T12:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.973899 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.974212 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.974332 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.974413 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:45 crc kubenswrapper[4930]: I1124 12:00:45.974502 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:45Z","lastTransitionTime":"2025-11-24T12:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.076608 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.076642 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.076650 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.076662 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.076671 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:46Z","lastTransitionTime":"2025-11-24T12:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.083865 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:46 crc kubenswrapper[4930]: E1124 12:00:46.084064 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.178521 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.178580 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.178592 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.178610 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.178623 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:46Z","lastTransitionTime":"2025-11-24T12:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.282025 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.282359 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.282456 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.282528 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.282644 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:46Z","lastTransitionTime":"2025-11-24T12:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.385331 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.385623 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.385714 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.385804 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.385887 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:46Z","lastTransitionTime":"2025-11-24T12:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.488331 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.488366 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.488377 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.488393 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.488404 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:46Z","lastTransitionTime":"2025-11-24T12:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.590777 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.590829 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.590845 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.590869 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.590888 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:46Z","lastTransitionTime":"2025-11-24T12:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.693373 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.693442 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.693466 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.693499 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.693524 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:46Z","lastTransitionTime":"2025-11-24T12:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.795788 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.795822 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.795830 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.795842 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.795850 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:46Z","lastTransitionTime":"2025-11-24T12:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.898279 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.898341 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.898362 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.898392 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:46 crc kubenswrapper[4930]: I1124 12:00:46.898413 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:46Z","lastTransitionTime":"2025-11-24T12:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.002652 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.002707 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.002730 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.002759 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.002784 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:47Z","lastTransitionTime":"2025-11-24T12:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.084412 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.084439 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.084447 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 12:00:47 crc kubenswrapper[4930]: E1124 12:00:47.084588 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 12:00:47 crc kubenswrapper[4930]: E1124 12:00:47.084743 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 12:00:47 crc kubenswrapper[4930]: E1124 12:00:47.084804 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.105252 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.105316 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.105340 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.105371 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.105394 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:47Z","lastTransitionTime":"2025-11-24T12:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.208272 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.208980 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.209082 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.209177 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.209265 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:47Z","lastTransitionTime":"2025-11-24T12:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.311771 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.311828 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.311843 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.311864 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.311876 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:47Z","lastTransitionTime":"2025-11-24T12:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.414862 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.415071 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.415159 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.415226 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.415319 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:47Z","lastTransitionTime":"2025-11-24T12:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.517338 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.517778 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.517997 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.518130 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.518258 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:47Z","lastTransitionTime":"2025-11-24T12:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.620345 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.620764 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.620944 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.621118 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.621296 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:47Z","lastTransitionTime":"2025-11-24T12:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.724313 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.724353 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.724362 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.724376 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.724385 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:47Z","lastTransitionTime":"2025-11-24T12:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.826878 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.827226 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.827314 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.827405 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.827502 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:47Z","lastTransitionTime":"2025-11-24T12:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.929354 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.929387 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.929398 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.929413 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:47 crc kubenswrapper[4930]: I1124 12:00:47.929424 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:47Z","lastTransitionTime":"2025-11-24T12:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.032255 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.032512 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.032612 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.032783 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.032870 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:48Z","lastTransitionTime":"2025-11-24T12:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.084008 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv"
Nov 24 12:00:48 crc kubenswrapper[4930]: E1124 12:00:48.084450 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1"
Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.135523 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.135677 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.135706 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.135735 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.135756 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:48Z","lastTransitionTime":"2025-11-24T12:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.238080 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.238954 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.239199 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.239426 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.239653 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:48Z","lastTransitionTime":"2025-11-24T12:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.342525 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.342586 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.342598 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.342616 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.342628 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:48Z","lastTransitionTime":"2025-11-24T12:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.445230 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.445267 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.445276 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.445289 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.445300 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:48Z","lastTransitionTime":"2025-11-24T12:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.547466 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.547495 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.547503 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.547515 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.547524 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:48Z","lastTransitionTime":"2025-11-24T12:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.649772 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.649836 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.649853 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.649875 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.649892 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:48Z","lastTransitionTime":"2025-11-24T12:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.753055 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.753149 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.753172 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.753202 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.753223 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:48Z","lastTransitionTime":"2025-11-24T12:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.857127 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.857202 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.857237 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.857273 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.857294 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:48Z","lastTransitionTime":"2025-11-24T12:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.960922 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.961006 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.961027 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.961060 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:48 crc kubenswrapper[4930]: I1124 12:00:48.961082 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:48Z","lastTransitionTime":"2025-11-24T12:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.064777 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.064827 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.064839 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.064862 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.064880 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:49Z","lastTransitionTime":"2025-11-24T12:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.084198 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.084198 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 12:00:49 crc kubenswrapper[4930]: E1124 12:00:49.084585 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 12:00:49 crc kubenswrapper[4930]: E1124 12:00:49.084684 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.084265 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 12:00:49 crc kubenswrapper[4930]: E1124 12:00:49.084801 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.167788 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.167827 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.167838 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.167855 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.167866 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:49Z","lastTransitionTime":"2025-11-24T12:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.271119 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.271177 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.271196 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.271222 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.271242 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:49Z","lastTransitionTime":"2025-11-24T12:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.373686 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.373737 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.373746 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.373760 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.373769 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:49Z","lastTransitionTime":"2025-11-24T12:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.476667 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.476701 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.476711 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.476724 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.476732 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:49Z","lastTransitionTime":"2025-11-24T12:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.579297 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.579350 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.579362 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.579380 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.579393 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:49Z","lastTransitionTime":"2025-11-24T12:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.681920 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.681994 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.682011 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.682037 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.682057 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:49Z","lastTransitionTime":"2025-11-24T12:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.784911 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.784942 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.784950 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.784963 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.784971 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:49Z","lastTransitionTime":"2025-11-24T12:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.887413 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.887471 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.887489 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.887511 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.887526 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:49Z","lastTransitionTime":"2025-11-24T12:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.991103 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.991170 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.991193 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.991221 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:49 crc kubenswrapper[4930]: I1124 12:00:49.991241 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:49Z","lastTransitionTime":"2025-11-24T12:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.084112 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:50 crc kubenswrapper[4930]: E1124 12:00:50.084357 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.093799 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.093830 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.093837 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.093848 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.093856 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.197271 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.197341 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.197366 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.197393 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.197415 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.300488 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.300526 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.300559 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.300575 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.300584 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.403454 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.403509 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.403525 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.403564 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.403582 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.428518 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.428571 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.428584 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.428598 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.428609 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: E1124 12:00:50.442239 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:50Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.446311 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.446359 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.446370 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.446388 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.446402 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: E1124 12:00:50.466804 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:50Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.470129 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.470207 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.470233 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.470266 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.470289 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: E1124 12:00:50.491258 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:50Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.497335 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.497436 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.497493 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.497520 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.497583 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: E1124 12:00:50.514206 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:50Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.519301 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.519391 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.519415 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.519476 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.519504 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: E1124 12:00:50.536314 4930 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26e464ae-360f-4bd3-8823-d8644163564e\\\",\\\"systemUUID\\\":\\\"7e3330cf-3d22-4119-8ec8-af730100ba56\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:50Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:50 crc kubenswrapper[4930]: E1124 12:00:50.536516 4930 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.541427 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.541493 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.541513 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.541572 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.541597 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.643859 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.643910 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.643918 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.643933 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.643943 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.746392 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.746430 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.746440 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.746452 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.746462 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.849354 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.849413 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.849432 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.849456 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.849475 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.951773 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.951830 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.951846 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.951868 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:50 crc kubenswrapper[4930]: I1124 12:00:50.951885 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:50Z","lastTransitionTime":"2025-11-24T12:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.054268 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.054313 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.054324 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.054340 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.054351 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:51Z","lastTransitionTime":"2025-11-24T12:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.083951 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.084019 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:51 crc kubenswrapper[4930]: E1124 12:00:51.084055 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.083952 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:51 crc kubenswrapper[4930]: E1124 12:00:51.084129 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:51 crc kubenswrapper[4930]: E1124 12:00:51.084187 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.157756 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.157837 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.157872 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.157894 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.157906 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:51Z","lastTransitionTime":"2025-11-24T12:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.260597 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.260673 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.260696 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.260724 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.260744 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:51Z","lastTransitionTime":"2025-11-24T12:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.363094 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.363174 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.363192 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.363221 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.363240 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:51Z","lastTransitionTime":"2025-11-24T12:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.468235 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.468393 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.468417 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.468441 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.468459 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:51Z","lastTransitionTime":"2025-11-24T12:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.570866 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.570935 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.570946 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.570963 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.570975 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:51Z","lastTransitionTime":"2025-11-24T12:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.673415 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.673451 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.673459 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.673473 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.673482 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:51Z","lastTransitionTime":"2025-11-24T12:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.775898 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.775966 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.775985 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.776011 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.776032 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:51Z","lastTransitionTime":"2025-11-24T12:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.879113 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.879202 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.879221 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.879251 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.879280 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:51Z","lastTransitionTime":"2025-11-24T12:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.982063 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.982119 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.982131 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.982151 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:51 crc kubenswrapper[4930]: I1124 12:00:51.982164 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:51Z","lastTransitionTime":"2025-11-24T12:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.084171 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:52 crc kubenswrapper[4930]: E1124 12:00:52.084467 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.084564 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:52 crc kubenswrapper[4930]: E1124 12:00:52.084833 4930 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:00:52 crc kubenswrapper[4930]: E1124 12:00:52.084968 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs podName:96ced043-6cad-4f17-8648-624f36bf14f1 nodeName:}" failed. No retries permitted until 2025-11-24 12:01:56.084933565 +0000 UTC m=+162.699261545 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs") pod "network-metrics-daemon-r4jtv" (UID: "96ced043-6cad-4f17-8648-624f36bf14f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.085671 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.085719 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.085739 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.085764 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.085785 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:52Z","lastTransitionTime":"2025-11-24T12:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.188957 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.189067 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.189091 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.189121 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.189145 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:52Z","lastTransitionTime":"2025-11-24T12:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.292425 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.292493 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.292507 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.292529 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.292565 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:52Z","lastTransitionTime":"2025-11-24T12:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.396310 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.396381 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.396400 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.396433 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.396456 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:52Z","lastTransitionTime":"2025-11-24T12:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.499936 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.500000 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.500017 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.500046 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.500064 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:52Z","lastTransitionTime":"2025-11-24T12:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.603605 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.603663 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.603675 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.603692 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.603702 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:52Z","lastTransitionTime":"2025-11-24T12:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.706764 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.706835 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.706861 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.706898 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.706930 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:52Z","lastTransitionTime":"2025-11-24T12:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.810175 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.810225 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.810241 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.810261 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.810274 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:52Z","lastTransitionTime":"2025-11-24T12:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.913073 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.913166 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.913185 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.913214 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:52 crc kubenswrapper[4930]: I1124 12:00:52.913233 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:52Z","lastTransitionTime":"2025-11-24T12:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.016261 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.016341 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.016369 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.016406 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.016435 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:53Z","lastTransitionTime":"2025-11-24T12:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.083898 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.083972 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.084017 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:53 crc kubenswrapper[4930]: E1124 12:00:53.084222 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:53 crc kubenswrapper[4930]: E1124 12:00:53.084447 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:53 crc kubenswrapper[4930]: E1124 12:00:53.084744 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.123358 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.123432 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.123455 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.123487 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.123508 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:53Z","lastTransitionTime":"2025-11-24T12:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.227008 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.227137 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.227154 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.227179 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.227195 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:53Z","lastTransitionTime":"2025-11-24T12:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.330060 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.330125 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.330140 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.330166 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.330183 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:53Z","lastTransitionTime":"2025-11-24T12:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.433610 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.433672 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.433690 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.433713 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.433728 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:53Z","lastTransitionTime":"2025-11-24T12:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.537060 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.537127 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.537146 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.537173 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.537191 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:53Z","lastTransitionTime":"2025-11-24T12:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.640376 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.640448 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.640467 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.640497 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.640517 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:53Z","lastTransitionTime":"2025-11-24T12:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.743265 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.743312 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.743325 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.743346 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.743360 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:53Z","lastTransitionTime":"2025-11-24T12:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.845874 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.845938 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.845955 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.845981 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.846002 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:53Z","lastTransitionTime":"2025-11-24T12:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.948621 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.948657 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.948668 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.948684 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:53 crc kubenswrapper[4930]: I1124 12:00:53.948695 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:53Z","lastTransitionTime":"2025-11-24T12:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.050606 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.050653 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.050662 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.050679 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.050691 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:54Z","lastTransitionTime":"2025-11-24T12:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.084132 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:54 crc kubenswrapper[4930]: E1124 12:00:54.084295 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.103130 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bbdd5fadcc56167186eb9a67485e4cd3a7cda867056e3cd3d745f45f06ceb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.118385 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8835064f-65c7-48cb-8b7d-330e5cce840a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3147ac5175a33aefea9132fb6f61055923e0c9aa9f3b3ee1f4a7b5fb4c4b54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qgn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kjhcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.132813 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00dacb346f4c4b671ab62bc71c0fad965f682c8bb41453d6ab8367a6d1420262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.147411 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36deb7de93f360446d2b0184b50770b72ada5d4a4b9559ef12dffd97cb130564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\
"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a78b34add6cb030a2a25fe5dbe0e9cde6347fc3ae8ee0d29ad7746716a83a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.152960 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.153005 4930 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.153014 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.153034 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.153045 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:54Z","lastTransitionTime":"2025-11-24T12:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.162715 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5lvxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c34ffc-f1cd-4828-b83c-22bd0c02f364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:21Z\\\",\\\"message\\\":\\\"2025-11-24T11:59:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813\\\\n2025-11-24T11:59:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_19464f25-ca37-4ab6-8382-970f5ad08813 to /host/opt/cni/bin/\\\\n2025-11-24T11:59:36Z [verbose] multus-daemon started\\\\n2025-11-24T11:59:36Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T12:00:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m687j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5lvxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.194306 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3159aca-5e15-4f2c-ae74-e547f4a227f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:00:27Z\\\",\\\"message\\\":\\\"red: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": 
failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 12:00:27.006921 6972 obj_retry.go:409] Going to retry *v1.Pod resource setup for 13 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-additional-cni-plugins-c8rb7 openshift-multus/network-metrics-daemon-r4jtv openshift-network-operator/iptables-alerter-4ln5h openshift-dns/node-resolver-gfn4n openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk openshift-image-registry/node-ca-8mhdf openshift-multus/multus-5lvxv openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1124 12:00:27.007187 6972 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI1124 12:00:27.007200 6972 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8749dab3f7dcf3b
da96080d7ae435640a1ddb5be0e521d2e1958e155126e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9gj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b6q2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.217673 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aee5f87e-22f1-4e8c-8f14-3d792f4d9a08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2853609b673015259d6e31e65e1d58cab042968205f3716cba47e8a4c426fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35b7a8c157f41a4dd33ede4204ff4d8a80f45da3e3d6d05b984272e4cf6d6582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dfc92916b9da972eb9b80761846bff05e9151ea98ca97ab35b4f0e5924e5a23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d77e0d45ce8fd837326c7099c5a983d688e6b41d4e6206426406f63f15ae51d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a15
dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a15dfe444a7e4197aed686d818fa1001c8424a051ae136c0c8b2f6d3b0d79a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03a3b49194748a7f223a4f56b4189cdf125c032f1b98fa7a9af6e2bfcf4db066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12cf5faef3751bde5e36de67b618324d449bca8292a086f2ab09165a91623e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmtq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c8rb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.232678 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ced043-6cad-4f17-8648-624f36bf14f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fg4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4jtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc 
kubenswrapper[4930]: I1124 12:00:54.248639 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74710830-f474-458e-b871-0b7be860bdad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae67af2476e24057506bc0b9522c36edc01c9ed0cbdf28e8a507b3145ca8a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7911dcd8010054bda50955cacdf50d03f6f33976ce46922e810d8a903ca3f8\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d736c52fd83cddaf5cb6a8640251e73f0b8fd7f66a5190ae29aecebd4b6c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25df652c8a33527ee815c98e7c4bd481267b212537e12b3b995238a6e626a307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.255816 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.255866 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.255874 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.255892 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.255904 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:54Z","lastTransitionTime":"2025-11-24T12:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.264171 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.279067 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.292809 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e677c41-2d4e-47b3-840b-cd43f1c5ed34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aaf2e79eef323b69f0ba409ac0197b7681b4a5672220aff68adb3df438f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68cdf2050ad1fef444882483479c1be2efb2
1f2fe5610b8118865f819048dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hk22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.307323 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"881decff-a67a-4a91-87d3-15227d107507\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0313394d52656bfef5db8caa93356b9fe7d769c4a4de35216906765286a8bd24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16cacc71b02c530aa45f13aeac3f91a20322c9719995759dfdeff0c665e86d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7774518f0b26deabf55e6b73ef01ea69e5fd14e2401715087f4b17e92ab68fc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae376a6798ac57fc01bd50c79c53292c7a8a9d9e4ed7550020072b35ffbed70f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.319408 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.330645 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gfn4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b976b8fb-925e-4ceb-bba5-de69b9bbb46b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eac8e4fcafec3bf42a40389bb49d8ece47f27cecf7304e91d91122e1a5d512d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45d6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gfn4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.344441 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8mhdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef6ac963-b7db-4c43-891c-d8eb105e566a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9930db4feddeaee91078caa94b0dc11277901f7713ff8f6fb5f2e2bf1d937e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8mhdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.358925 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.358968 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.358979 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.359004 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.359022 4930 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:54Z","lastTransitionTime":"2025-11-24T12:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.361870 4930 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d733227-cfa3-4bc6-b6e6-9901b4574412\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edfe006f31272340fa98b4821ee0dce6d60014bbfc82c2d9d3eb94ba793804b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65749fc086891c98be90e9567512181dbec456e46a0a1ee4757ea96a8baad5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65749fc086891c98be90e9567512181dbec456e46a0a1ee4757ea96a8baad5f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.379030 4930 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f43f2944-b23a-4356-98dd-f51aa3b164d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:59:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83baa26897268f6942431d0d5c37b34b1c3698a779972ff250721327965af724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df06274a962d6c625088956f201eb2950896221aec482e91b090c73c408bdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5cca0c21a4bdeeb3d773cfac823e1073ad35037c421fa8b1132316a39aa7b3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2313b37d6755d2b48447f2d2134184d37dc639e91370c112e60b05d009ad25d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf210a9782fd7c2db1cf27c54486c846168f153ec2b9908103c19e14f9d
23282\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:59:32Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:59:17.490773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:59:17.495012 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2507191741/tls.crt::/tmp/serving-cert-2507191741/tls.key\\\\\\\"\\\\nI1124 11:59:32.416588 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:59:32.421010 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:59:32.421085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:59:32.421134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:59:32.421160 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:59:32.427650 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1124 11:59:32.427678 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1124 11:59:32.427689 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:59:32.427703 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:59:32.427706 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:59:32.427711 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:59:32.427714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 11:59:32.430856 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c892b52fd8d7e68e270de6f46287ae57595e20b48c838ee646ec3a4a94f556d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:59:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd4b87b8b52134eb7908d3da79a63d82b259b707e8fe382089d14bbc5eca712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:59:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:59:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:00:54Z is after 2025-08-24T17:21:41Z" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.462840 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.462912 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.462926 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.462950 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.462968 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:54Z","lastTransitionTime":"2025-11-24T12:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.572969 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.573058 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.573072 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.573091 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.573109 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:54Z","lastTransitionTime":"2025-11-24T12:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.675674 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.676131 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.676210 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.676315 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.676412 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:54Z","lastTransitionTime":"2025-11-24T12:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.779900 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.779939 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.779950 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.779968 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.779979 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:54Z","lastTransitionTime":"2025-11-24T12:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.883849 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.883919 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.883937 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.883966 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.883985 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:54Z","lastTransitionTime":"2025-11-24T12:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.986133 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.986199 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.986217 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.986246 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:54 crc kubenswrapper[4930]: I1124 12:00:54.986265 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:54Z","lastTransitionTime":"2025-11-24T12:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.083844 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.083935 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.084177 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:55 crc kubenswrapper[4930]: E1124 12:00:55.084363 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:55 crc kubenswrapper[4930]: E1124 12:00:55.084912 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:55 crc kubenswrapper[4930]: E1124 12:00:55.085003 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.088491 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.088584 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.088609 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.088639 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.088664 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:55Z","lastTransitionTime":"2025-11-24T12:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.102994 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.190806 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.190843 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.190855 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.190873 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.190886 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:55Z","lastTransitionTime":"2025-11-24T12:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.293599 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.293623 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.293632 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.293644 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.293653 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:55Z","lastTransitionTime":"2025-11-24T12:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.396520 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.396598 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.396613 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.396638 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.396655 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:55Z","lastTransitionTime":"2025-11-24T12:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.499891 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.499957 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.499974 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.500005 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.500023 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:55Z","lastTransitionTime":"2025-11-24T12:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.604331 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.604385 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.604396 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.604420 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.604435 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:55Z","lastTransitionTime":"2025-11-24T12:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.707844 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.707971 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.708000 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.708035 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.708056 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:55Z","lastTransitionTime":"2025-11-24T12:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.810984 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.811059 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.811082 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.811112 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.811135 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:55Z","lastTransitionTime":"2025-11-24T12:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.915032 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.915085 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.915102 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.915122 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:55 crc kubenswrapper[4930]: I1124 12:00:55.915134 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:55Z","lastTransitionTime":"2025-11-24T12:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.018067 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.018148 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.018172 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.018204 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.018228 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:56Z","lastTransitionTime":"2025-11-24T12:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.083959 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:56 crc kubenswrapper[4930]: E1124 12:00:56.084196 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.120405 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.120447 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.120458 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.120474 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.120485 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:56Z","lastTransitionTime":"2025-11-24T12:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.223232 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.223300 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.223316 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.223339 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.223353 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:56Z","lastTransitionTime":"2025-11-24T12:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.325982 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.326026 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.326038 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.326057 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.326074 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:56Z","lastTransitionTime":"2025-11-24T12:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.431217 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.431275 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.431287 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.431305 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.431317 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:56Z","lastTransitionTime":"2025-11-24T12:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.537701 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.537754 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.537768 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.537790 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.537810 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:56Z","lastTransitionTime":"2025-11-24T12:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.641079 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.641152 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.641175 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.641470 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.641487 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:56Z","lastTransitionTime":"2025-11-24T12:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.745056 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.745108 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.745149 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.745172 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.745187 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:56Z","lastTransitionTime":"2025-11-24T12:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.847833 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.847871 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.847883 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.847897 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.847909 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:56Z","lastTransitionTime":"2025-11-24T12:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.950982 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.951044 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.951059 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.951083 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:56 crc kubenswrapper[4930]: I1124 12:00:56.951100 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:56Z","lastTransitionTime":"2025-11-24T12:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.052927 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.052981 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.052994 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.053010 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.053022 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:57Z","lastTransitionTime":"2025-11-24T12:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.084219 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.084330 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.084659 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:57 crc kubenswrapper[4930]: E1124 12:00:57.084779 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:57 crc kubenswrapper[4930]: E1124 12:00:57.084974 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:57 crc kubenswrapper[4930]: E1124 12:00:57.085045 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.085191 4930 scope.go:117] "RemoveContainer" containerID="56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051" Nov 24 12:00:57 crc kubenswrapper[4930]: E1124 12:00:57.085386 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b6q2v_openshift-ovn-kubernetes(b3159aca-5e15-4f2c-ae74-e547f4a227f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.156045 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.156111 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.156122 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.156142 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.156155 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:57Z","lastTransitionTime":"2025-11-24T12:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.260465 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.260589 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.260624 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.260652 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.260672 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:57Z","lastTransitionTime":"2025-11-24T12:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.363199 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.363254 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.363273 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.363295 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.363312 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:57Z","lastTransitionTime":"2025-11-24T12:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.466407 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.466465 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.466483 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.466506 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.466523 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:57Z","lastTransitionTime":"2025-11-24T12:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.569498 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.569577 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.569598 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.569618 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.569631 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:57Z","lastTransitionTime":"2025-11-24T12:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.672339 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.672431 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.672456 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.672487 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.672509 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:57Z","lastTransitionTime":"2025-11-24T12:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.776309 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.776403 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.776424 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.776450 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.776468 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:57Z","lastTransitionTime":"2025-11-24T12:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.879621 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.879707 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.879745 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.879774 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.879793 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:57Z","lastTransitionTime":"2025-11-24T12:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.982138 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.982200 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.982210 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.982225 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:57 crc kubenswrapper[4930]: I1124 12:00:57.982233 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:57Z","lastTransitionTime":"2025-11-24T12:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.083857 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:00:58 crc kubenswrapper[4930]: E1124 12:00:58.084301 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.085462 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.085499 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.085513 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.085527 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.085552 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:58Z","lastTransitionTime":"2025-11-24T12:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.187127 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.187173 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.187185 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.187201 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.187213 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:58Z","lastTransitionTime":"2025-11-24T12:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.290224 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.290286 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.290304 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.290326 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.290347 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:58Z","lastTransitionTime":"2025-11-24T12:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.393255 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.393318 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.393356 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.393393 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.393418 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:58Z","lastTransitionTime":"2025-11-24T12:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.495905 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.495937 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.495946 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.495959 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.495972 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:58Z","lastTransitionTime":"2025-11-24T12:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.598023 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.598071 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.598086 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.598106 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.598120 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:58Z","lastTransitionTime":"2025-11-24T12:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.700898 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.700934 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.700945 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.700959 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.700970 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:58Z","lastTransitionTime":"2025-11-24T12:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.803240 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.803276 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.803284 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.803296 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.803306 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:58Z","lastTransitionTime":"2025-11-24T12:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.906001 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.906056 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.906069 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.906087 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:58 crc kubenswrapper[4930]: I1124 12:00:58.906100 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:58Z","lastTransitionTime":"2025-11-24T12:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.008304 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.008350 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.008360 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.008373 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.008385 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:59Z","lastTransitionTime":"2025-11-24T12:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.083562 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.083660 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.083599 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:00:59 crc kubenswrapper[4930]: E1124 12:00:59.083799 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:00:59 crc kubenswrapper[4930]: E1124 12:00:59.083900 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:00:59 crc kubenswrapper[4930]: E1124 12:00:59.084113 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.110917 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.110974 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.110998 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.111025 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.111047 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:59Z","lastTransitionTime":"2025-11-24T12:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.214336 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.214395 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.214418 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.214448 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.214473 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:59Z","lastTransitionTime":"2025-11-24T12:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.318703 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.318789 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.318811 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.318839 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.318860 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:59Z","lastTransitionTime":"2025-11-24T12:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.421431 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.421472 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.421480 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.421495 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.421504 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:59Z","lastTransitionTime":"2025-11-24T12:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.524639 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.524694 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.524712 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.524733 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.524757 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:59Z","lastTransitionTime":"2025-11-24T12:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.627276 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.627375 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.627392 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.627409 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.627422 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:59Z","lastTransitionTime":"2025-11-24T12:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.730118 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.730169 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.730184 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.730204 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.730219 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:59Z","lastTransitionTime":"2025-11-24T12:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.833277 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.833334 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.833346 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.833365 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.833379 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:59Z","lastTransitionTime":"2025-11-24T12:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.936155 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.936199 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.936213 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.936234 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:00:59 crc kubenswrapper[4930]: I1124 12:00:59.936253 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:00:59Z","lastTransitionTime":"2025-11-24T12:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.038831 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.038874 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.038885 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.038900 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.038910 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:01:00Z","lastTransitionTime":"2025-11-24T12:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.083567 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:00 crc kubenswrapper[4930]: E1124 12:01:00.083739 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.140701 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.140736 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.140746 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.140761 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.140772 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:01:00Z","lastTransitionTime":"2025-11-24T12:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.243505 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.243557 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.243566 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.243581 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.243589 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:01:00Z","lastTransitionTime":"2025-11-24T12:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.346178 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.346239 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.346268 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.346290 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.346304 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:01:00Z","lastTransitionTime":"2025-11-24T12:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.448944 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.448994 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.449027 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.449049 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.449062 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:01:00Z","lastTransitionTime":"2025-11-24T12:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.551391 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.551437 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.551450 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.551470 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.551482 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:01:00Z","lastTransitionTime":"2025-11-24T12:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.653673 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.653747 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.653770 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.653800 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.653824 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:01:00Z","lastTransitionTime":"2025-11-24T12:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.685427 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.685469 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.685481 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.685497 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.685507 4930 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:01:00Z","lastTransitionTime":"2025-11-24T12:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.737256 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7"] Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.737652 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.740091 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.740803 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.740886 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.740963 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.773852 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.773833295 podStartE2EDuration="30.773833295s" podCreationTimestamp="2025-11-24 12:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:00.754447762 +0000 UTC m=+107.368775722" watchObservedRunningTime="2025-11-24 12:01:00.773833295 +0000 UTC m=+107.388161235" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.789417 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.789495 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.789729 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.789825 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.796894 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.795899 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.795876589 podStartE2EDuration="1m28.795876589s" podCreationTimestamp="2025-11-24 
11:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:00.775650394 +0000 UTC m=+107.389978354" watchObservedRunningTime="2025-11-24 12:01:00.795876589 +0000 UTC m=+107.410204569" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.808817 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-gfn4n" podStartSLOduration=87.808796106 podStartE2EDuration="1m27.808796106s" podCreationTimestamp="2025-11-24 11:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:00.808254842 +0000 UTC m=+107.422582792" watchObservedRunningTime="2025-11-24 12:01:00.808796106 +0000 UTC m=+107.423124057" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.834172 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8mhdf" podStartSLOduration=86.834151769 podStartE2EDuration="1m26.834151769s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:00.820377799 +0000 UTC m=+107.434705749" watchObservedRunningTime="2025-11-24 12:01:00.834151769 +0000 UTC m=+107.448479719" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.865476 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podStartSLOduration=87.865458013 podStartE2EDuration="1m27.865458013s" podCreationTimestamp="2025-11-24 11:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:00.865110873 +0000 UTC m=+107.479438823" watchObservedRunningTime="2025-11-24 12:01:00.865458013 +0000 
UTC m=+107.479785963" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.881776 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-c8rb7" podStartSLOduration=87.881755352 podStartE2EDuration="1m27.881755352s" podCreationTimestamp="2025-11-24 11:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:00.88170421 +0000 UTC m=+107.496032160" watchObservedRunningTime="2025-11-24 12:01:00.881755352 +0000 UTC m=+107.496083302" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.898041 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.898102 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.898144 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.898185 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.898225 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.898301 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.898388 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.899137 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.904332 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.905782 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.905764448 podStartE2EDuration="56.905764448s" podCreationTimestamp="2025-11-24 12:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:00.905614144 +0000 UTC m=+107.519942104" watchObservedRunningTime="2025-11-24 12:01:00.905764448 +0000 UTC m=+107.520092398" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.916073 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8bf303df-dbb6-46f1-8bf8-6b22da4d01f2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-d4bz7\" (UID: \"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.961906 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5lvxv" podStartSLOduration=87.96188866 podStartE2EDuration="1m27.96188866s" podCreationTimestamp="2025-11-24 11:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:00.961306965 +0000 UTC m=+107.575634905" 
watchObservedRunningTime="2025-11-24 12:01:00.96188866 +0000 UTC m=+107.576216610" Nov 24 12:01:00 crc kubenswrapper[4930]: I1124 12:01:00.996607 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.996588735 podStartE2EDuration="1m24.996588735s" podCreationTimestamp="2025-11-24 11:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:00.996566954 +0000 UTC m=+107.610894904" watchObservedRunningTime="2025-11-24 12:01:00.996588735 +0000 UTC m=+107.610916685" Nov 24 12:01:01 crc kubenswrapper[4930]: I1124 12:01:01.020319 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=6.020301624 podStartE2EDuration="6.020301624s" podCreationTimestamp="2025-11-24 12:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:01.018882025 +0000 UTC m=+107.633209995" watchObservedRunningTime="2025-11-24 12:01:01.020301624 +0000 UTC m=+107.634629574" Nov 24 12:01:01 crc kubenswrapper[4930]: I1124 12:01:01.054264 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vsnk" podStartSLOduration=86.054247608 podStartE2EDuration="1m26.054247608s" podCreationTimestamp="2025-11-24 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:01.053260281 +0000 UTC m=+107.667588231" watchObservedRunningTime="2025-11-24 12:01:01.054247608 +0000 UTC m=+107.668575558" Nov 24 12:01:01 crc kubenswrapper[4930]: I1124 12:01:01.055917 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" Nov 24 12:01:01 crc kubenswrapper[4930]: I1124 12:01:01.084269 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:01 crc kubenswrapper[4930]: E1124 12:01:01.084380 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:01:01 crc kubenswrapper[4930]: I1124 12:01:01.084416 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:01 crc kubenswrapper[4930]: I1124 12:01:01.084421 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:01 crc kubenswrapper[4930]: E1124 12:01:01.084459 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:01:01 crc kubenswrapper[4930]: E1124 12:01:01.084529 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:01:01 crc kubenswrapper[4930]: I1124 12:01:01.648039 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" event={"ID":"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2","Type":"ContainerStarted","Data":"fd9d12e4ce089e8855eac81bcfe53c9e1e442b04854fb2cebb903e25b0f3dd8e"} Nov 24 12:01:01 crc kubenswrapper[4930]: I1124 12:01:01.648589 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" event={"ID":"8bf303df-dbb6-46f1-8bf8-6b22da4d01f2","Type":"ContainerStarted","Data":"e9179e95ac06e18102a1d6d2e73f4c886f06b4ec5996b7c821c298aabd0b5899"} Nov 24 12:01:01 crc kubenswrapper[4930]: I1124 12:01:01.670430 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4bz7" podStartSLOduration=88.670399914 podStartE2EDuration="1m28.670399914s" podCreationTimestamp="2025-11-24 11:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:01.669770237 +0000 UTC m=+108.284098187" watchObservedRunningTime="2025-11-24 12:01:01.670399914 +0000 UTC m=+108.284727864" Nov 24 12:01:02 crc kubenswrapper[4930]: I1124 12:01:02.084104 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:02 crc kubenswrapper[4930]: E1124 12:01:02.084255 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:01:03 crc kubenswrapper[4930]: I1124 12:01:03.083993 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:03 crc kubenswrapper[4930]: I1124 12:01:03.084102 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:03 crc kubenswrapper[4930]: E1124 12:01:03.084204 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:01:03 crc kubenswrapper[4930]: I1124 12:01:03.084349 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:03 crc kubenswrapper[4930]: E1124 12:01:03.084395 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:01:03 crc kubenswrapper[4930]: E1124 12:01:03.084504 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:01:04 crc kubenswrapper[4930]: I1124 12:01:04.084203 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:04 crc kubenswrapper[4930]: E1124 12:01:04.086611 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:01:05 crc kubenswrapper[4930]: I1124 12:01:05.084498 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:05 crc kubenswrapper[4930]: E1124 12:01:05.084643 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:01:05 crc kubenswrapper[4930]: I1124 12:01:05.084488 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:05 crc kubenswrapper[4930]: I1124 12:01:05.084488 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:05 crc kubenswrapper[4930]: E1124 12:01:05.084710 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:01:05 crc kubenswrapper[4930]: E1124 12:01:05.084891 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:01:06 crc kubenswrapper[4930]: I1124 12:01:06.083676 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:06 crc kubenswrapper[4930]: E1124 12:01:06.083858 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:01:07 crc kubenswrapper[4930]: I1124 12:01:07.083548 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:07 crc kubenswrapper[4930]: I1124 12:01:07.083587 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:07 crc kubenswrapper[4930]: I1124 12:01:07.083678 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:07 crc kubenswrapper[4930]: E1124 12:01:07.083816 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:01:07 crc kubenswrapper[4930]: E1124 12:01:07.083960 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:01:07 crc kubenswrapper[4930]: E1124 12:01:07.084063 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:01:07 crc kubenswrapper[4930]: I1124 12:01:07.665463 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5lvxv_68c34ffc-f1cd-4828-b83c-22bd0c02f364/kube-multus/1.log" Nov 24 12:01:07 crc kubenswrapper[4930]: I1124 12:01:07.666117 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5lvxv_68c34ffc-f1cd-4828-b83c-22bd0c02f364/kube-multus/0.log" Nov 24 12:01:07 crc kubenswrapper[4930]: I1124 12:01:07.666179 4930 generic.go:334] "Generic (PLEG): container finished" podID="68c34ffc-f1cd-4828-b83c-22bd0c02f364" containerID="c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec" exitCode=1 Nov 24 12:01:07 crc kubenswrapper[4930]: I1124 12:01:07.666217 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5lvxv" event={"ID":"68c34ffc-f1cd-4828-b83c-22bd0c02f364","Type":"ContainerDied","Data":"c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec"} Nov 24 12:01:07 crc kubenswrapper[4930]: I1124 12:01:07.666260 4930 scope.go:117] "RemoveContainer" containerID="d91450c390f2de135fa7c0b64f42ce840e0026826fbeb8c62d6d96c2086f3336" Nov 24 12:01:07 crc kubenswrapper[4930]: I1124 12:01:07.666752 4930 scope.go:117] "RemoveContainer" containerID="c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec" Nov 24 12:01:07 crc 
kubenswrapper[4930]: E1124 12:01:07.666956 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-5lvxv_openshift-multus(68c34ffc-f1cd-4828-b83c-22bd0c02f364)\"" pod="openshift-multus/multus-5lvxv" podUID="68c34ffc-f1cd-4828-b83c-22bd0c02f364" Nov 24 12:01:08 crc kubenswrapper[4930]: I1124 12:01:08.083830 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:08 crc kubenswrapper[4930]: E1124 12:01:08.083963 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:01:08 crc kubenswrapper[4930]: I1124 12:01:08.670369 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5lvxv_68c34ffc-f1cd-4828-b83c-22bd0c02f364/kube-multus/1.log" Nov 24 12:01:09 crc kubenswrapper[4930]: I1124 12:01:09.084496 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:09 crc kubenswrapper[4930]: I1124 12:01:09.084506 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:09 crc kubenswrapper[4930]: I1124 12:01:09.084496 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:09 crc kubenswrapper[4930]: E1124 12:01:09.084626 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:01:09 crc kubenswrapper[4930]: E1124 12:01:09.084899 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:01:09 crc kubenswrapper[4930]: E1124 12:01:09.084964 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:01:10 crc kubenswrapper[4930]: I1124 12:01:10.084082 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:10 crc kubenswrapper[4930]: E1124 12:01:10.084911 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:01:10 crc kubenswrapper[4930]: I1124 12:01:10.085476 4930 scope.go:117] "RemoveContainer" containerID="56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051" Nov 24 12:01:10 crc kubenswrapper[4930]: I1124 12:01:10.678735 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/3.log" Nov 24 12:01:10 crc kubenswrapper[4930]: I1124 12:01:10.681472 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerStarted","Data":"5cc9ac8563be395cd2ee4f6dad8b594527f757b07855ece812c56b6e6917654f"} Nov 24 12:01:10 crc kubenswrapper[4930]: I1124 12:01:10.682362 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 12:01:10 crc kubenswrapper[4930]: I1124 12:01:10.706123 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podStartSLOduration=96.706107323 podStartE2EDuration="1m36.706107323s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:10.705189838 +0000 UTC m=+117.319517788" watchObservedRunningTime="2025-11-24 
12:01:10.706107323 +0000 UTC m=+117.320435283" Nov 24 12:01:11 crc kubenswrapper[4930]: I1124 12:01:11.060492 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-r4jtv"] Nov 24 12:01:11 crc kubenswrapper[4930]: I1124 12:01:11.060647 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:11 crc kubenswrapper[4930]: E1124 12:01:11.060760 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:01:11 crc kubenswrapper[4930]: I1124 12:01:11.084407 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:11 crc kubenswrapper[4930]: I1124 12:01:11.084446 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:11 crc kubenswrapper[4930]: I1124 12:01:11.084495 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:11 crc kubenswrapper[4930]: E1124 12:01:11.084550 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:01:11 crc kubenswrapper[4930]: E1124 12:01:11.084619 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:01:11 crc kubenswrapper[4930]: E1124 12:01:11.084672 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:01:13 crc kubenswrapper[4930]: I1124 12:01:13.083995 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:13 crc kubenswrapper[4930]: I1124 12:01:13.084034 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:13 crc kubenswrapper[4930]: I1124 12:01:13.084102 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:13 crc kubenswrapper[4930]: E1124 12:01:13.084145 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:01:13 crc kubenswrapper[4930]: I1124 12:01:13.084109 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:13 crc kubenswrapper[4930]: E1124 12:01:13.084249 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:01:13 crc kubenswrapper[4930]: E1124 12:01:13.084381 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:01:13 crc kubenswrapper[4930]: E1124 12:01:13.084424 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:01:14 crc kubenswrapper[4930]: E1124 12:01:14.122690 4930 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 24 12:01:14 crc kubenswrapper[4930]: E1124 12:01:14.196830 4930 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 12:01:15 crc kubenswrapper[4930]: I1124 12:01:15.084412 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:15 crc kubenswrapper[4930]: I1124 12:01:15.084469 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:15 crc kubenswrapper[4930]: I1124 12:01:15.084497 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:15 crc kubenswrapper[4930]: I1124 12:01:15.084432 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:15 crc kubenswrapper[4930]: E1124 12:01:15.084597 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:01:15 crc kubenswrapper[4930]: E1124 12:01:15.084671 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:01:15 crc kubenswrapper[4930]: E1124 12:01:15.084733 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:01:15 crc kubenswrapper[4930]: E1124 12:01:15.084950 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:01:17 crc kubenswrapper[4930]: I1124 12:01:17.083529 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:17 crc kubenswrapper[4930]: I1124 12:01:17.083637 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:17 crc kubenswrapper[4930]: I1124 12:01:17.083575 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:17 crc kubenswrapper[4930]: I1124 12:01:17.083575 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:17 crc kubenswrapper[4930]: E1124 12:01:17.083751 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:01:17 crc kubenswrapper[4930]: E1124 12:01:17.083829 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:01:17 crc kubenswrapper[4930]: E1124 12:01:17.083683 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:01:17 crc kubenswrapper[4930]: E1124 12:01:17.083920 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:01:18 crc kubenswrapper[4930]: I1124 12:01:18.084230 4930 scope.go:117] "RemoveContainer" containerID="c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec" Nov 24 12:01:18 crc kubenswrapper[4930]: I1124 12:01:18.706733 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5lvxv_68c34ffc-f1cd-4828-b83c-22bd0c02f364/kube-multus/1.log" Nov 24 12:01:18 crc kubenswrapper[4930]: I1124 12:01:18.706800 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5lvxv" event={"ID":"68c34ffc-f1cd-4828-b83c-22bd0c02f364","Type":"ContainerStarted","Data":"58dd67e4f1a6eee0dddd3efb328f11e571b324eaebb707f289abac0be5b3a1d6"} Nov 24 12:01:19 crc kubenswrapper[4930]: I1124 12:01:19.083856 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:19 crc kubenswrapper[4930]: I1124 12:01:19.083931 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:19 crc kubenswrapper[4930]: I1124 12:01:19.084004 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:19 crc kubenswrapper[4930]: E1124 12:01:19.084046 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:01:19 crc kubenswrapper[4930]: I1124 12:01:19.083908 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:19 crc kubenswrapper[4930]: E1124 12:01:19.084155 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:01:19 crc kubenswrapper[4930]: E1124 12:01:19.084265 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:01:19 crc kubenswrapper[4930]: E1124 12:01:19.084394 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:01:19 crc kubenswrapper[4930]: E1124 12:01:19.198403 4930 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 12:01:21 crc kubenswrapper[4930]: I1124 12:01:21.083939 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:21 crc kubenswrapper[4930]: I1124 12:01:21.083939 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:21 crc kubenswrapper[4930]: E1124 12:01:21.084490 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:01:21 crc kubenswrapper[4930]: I1124 12:01:21.084044 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:21 crc kubenswrapper[4930]: E1124 12:01:21.084728 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:01:21 crc kubenswrapper[4930]: E1124 12:01:21.084614 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:01:21 crc kubenswrapper[4930]: I1124 12:01:21.083975 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:21 crc kubenswrapper[4930]: E1124 12:01:21.084816 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:01:23 crc kubenswrapper[4930]: I1124 12:01:23.084265 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:23 crc kubenswrapper[4930]: E1124 12:01:23.084437 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4jtv" podUID="96ced043-6cad-4f17-8648-624f36bf14f1" Nov 24 12:01:23 crc kubenswrapper[4930]: I1124 12:01:23.084508 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:23 crc kubenswrapper[4930]: I1124 12:01:23.084282 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:23 crc kubenswrapper[4930]: E1124 12:01:23.084703 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:01:23 crc kubenswrapper[4930]: E1124 12:01:23.084771 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:01:23 crc kubenswrapper[4930]: I1124 12:01:23.084827 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:23 crc kubenswrapper[4930]: E1124 12:01:23.084983 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:01:25 crc kubenswrapper[4930]: I1124 12:01:25.084590 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:25 crc kubenswrapper[4930]: I1124 12:01:25.084597 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:25 crc kubenswrapper[4930]: I1124 12:01:25.085365 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:25 crc kubenswrapper[4930]: I1124 12:01:25.085443 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:25 crc kubenswrapper[4930]: I1124 12:01:25.086559 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 12:01:25 crc kubenswrapper[4930]: I1124 12:01:25.087390 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 12:01:25 crc kubenswrapper[4930]: I1124 12:01:25.087706 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 12:01:25 crc kubenswrapper[4930]: I1124 12:01:25.087775 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 12:01:25 crc kubenswrapper[4930]: I1124 12:01:25.088133 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 24 12:01:25 crc kubenswrapper[4930]: I1124 12:01:25.088632 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.373796 4930 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.417187 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xhvvt"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.417741 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.417822 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kw8wv"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.418450 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.422106 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.422586 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.422903 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dkr44"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.422966 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.423198 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.423505 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.423994 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.424354 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.424441 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.424650 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.424707 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jl279"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.424799 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.425056 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.425069 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.425381 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.425624 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.425437 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.430207 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.430434 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.430716 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.436571 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.436786 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.436940 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.437130 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.437275 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.437483 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.437623 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.437993 4930 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.438477 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.440524 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.440593 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.440658 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.440682 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.441636 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.441721 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.441797 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.441866 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.441950 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 
12:01:31.442051 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.442225 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.443457 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.444047 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.447085 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.449433 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.449619 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.449682 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.449806 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.449933 4930 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.449962 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.449869 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.449891 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.450024 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.450043 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.450241 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.450354 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.450598 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5bktz"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.450978 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5bktz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.451418 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.452752 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.452894 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.453176 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.453906 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.455395 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-l4vrl"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.460453 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.461210 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-l4vrl" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.463451 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.463626 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.463657 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.463802 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.465217 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.471454 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.471924 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.472097 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.472260 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.472676 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 12:01:31 crc kubenswrapper[4930]: 
I1124 12:01:31.472960 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.472988 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.473094 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.473122 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.474941 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s42xf"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.475089 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.475347 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.475398 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-qfqmk"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.475840 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.475910 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.475413 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-s42xf" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.476522 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z7lsz"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.476631 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.476797 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.476894 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.477335 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.478016 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.478613 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fpv5v"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.478970 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kw8wv"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.479033 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.480033 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xhvvt"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.482635 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.483155 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.483516 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.483936 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.487070 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-m48vx"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.487673 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.488088 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.488346 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.488411 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.490253 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.490653 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.492925 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.493367 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.493622 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.493750 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.493861 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.493981 4930 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.494085 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.494185 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.494322 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.494443 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.494566 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.494623 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.494735 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.494829 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.494955 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.495243 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-226nn"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 
12:01:31.495850 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qh578"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.496172 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.496432 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.496739 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.497092 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.497286 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.497309 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qh578" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.497374 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.497524 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.497599 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.497664 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.497702 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.497745 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.497859 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.497907 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.497863 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.498004 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.498030 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.497558 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.498140 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.498155 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.498186 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.498323 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.503983 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.506452 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.508387 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdqq8"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.509120 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.509367 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.509814 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdqq8" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.510444 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.515556 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.515636 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.517355 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.523704 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.535575 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.536207 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6sjlm"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.537206 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.537272 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.537687 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.537704 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.538589 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nglrn"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.539332 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.539898 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jkm8r"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.539977 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.540409 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jkm8r" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.541914 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.542432 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.543159 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.543477 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.543609 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.546053 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.546295 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.547082 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9d78l"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.547759 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.548086 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.550745 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h4f7j"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.551688 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-h4f7j" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.553138 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jl279"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.554693 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5bktz"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.556424 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dkr44"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.557808 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-4xdgm"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.558712 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.559402 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-4xdgm" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.561610 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.561783 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.562797 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.564991 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l4vrl"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.566024 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.566839 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qfqmk"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.568132 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-226nn"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.569375 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6sjlm"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.570702 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qh578"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.572237 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 
12:01:31.573271 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdqq8"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.574471 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.575751 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-m48vx"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.578473 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.579766 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.580082 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.581976 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.583266 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s42xf"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.584956 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-bkp4v"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.585604 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bkp4v" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.586244 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nglrn"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.587345 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.589374 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9d78l"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.590275 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z7lsz"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.591515 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.592913 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.594235 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.595616 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.602088 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.608892 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-client-ca\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.608930 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc6dx\" (UniqueName: \"kubernetes.io/projected/3052750d-7ce6-4fee-8b97-f18ea3be457d-kube-api-access-xc6dx\") pod \"machine-approver-56656f9798-jh7qb\" (UID: \"3052750d-7ce6-4fee-8b97-f18ea3be457d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.608951 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwmnr\" (UniqueName: \"kubernetes.io/projected/cc9af663-c7f1-485e-a7fc-709da901e9e1-kube-api-access-mwmnr\") pod \"openshift-config-operator-7777fb866f-jl279\" (UID: \"cc9af663-c7f1-485e-a7fc-709da901e9e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.608966 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-config\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.608984 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1ab09e9-91ba-481e-b364-12e2a90bed8e-metrics-tls\") pod \"dns-operator-744455d44c-s42xf\" (UID: \"b1ab09e9-91ba-481e-b364-12e2a90bed8e\") " pod="openshift-dns-operator/dns-operator-744455d44c-s42xf" 
Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.608998 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-console-config\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609013 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89pf7\" (UniqueName: \"kubernetes.io/projected/7ef6223b-8ceb-4a44-b845-985899aff96b-kube-api-access-89pf7\") pod \"openshift-apiserver-operator-796bbdcf4f-dcf6j\" (UID: \"7ef6223b-8ceb-4a44-b845-985899aff96b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609035 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e779576-9402-40c2-bdf6-a62360dc60b3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lpjnn\" (UID: \"4e779576-9402-40c2-bdf6-a62360dc60b3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609055 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc9af663-c7f1-485e-a7fc-709da901e9e1-serving-cert\") pod \"openshift-config-operator-7777fb866f-jl279\" (UID: \"cc9af663-c7f1-485e-a7fc-709da901e9e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609075 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1764ecb0-77fe-4ff4-9106-6860622b2491-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtd74\" (UID: \"1764ecb0-77fe-4ff4-9106-6860622b2491\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609095 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3532e932-4436-4950-8f1d-b622a393356e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609114 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-service-ca\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609128 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-oauth-serving-cert\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609146 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9dd2e0b-db34-4962-a370-03deea21911a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 
crc kubenswrapper[4930]: I1124 12:01:31.609163 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-etcd-ca\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609178 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6639778e-480a-4822-90cb-48d2e976d509-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-kz2bg\" (UID: \"6639778e-480a-4822-90cb-48d2e976d509\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609193 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4ccb41c-8cdd-4751-8012-49fae4dc2bcb-auth-proxy-config\") pod \"machine-config-operator-74547568cd-226nn\" (UID: \"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609207 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3052750d-7ce6-4fee-8b97-f18ea3be457d-machine-approver-tls\") pod \"machine-approver-56656f9798-jh7qb\" (UID: \"3052750d-7ce6-4fee-8b97-f18ea3be457d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609220 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-serving-cert\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609236 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss5nl\" (UniqueName: \"kubernetes.io/projected/3532e932-4436-4950-8f1d-b622a393356e-kube-api-access-ss5nl\") pod \"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609259 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71dbe23d-480e-43aa-8106-b19ae5b98734-bound-sa-token\") pod \"ingress-operator-5b745b69d9-55nsz\" (UID: \"71dbe23d-480e-43aa-8106-b19ae5b98734\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609274 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tqwh\" (UniqueName: \"kubernetes.io/projected/f9dd2e0b-db34-4962-a370-03deea21911a-kube-api-access-5tqwh\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609287 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3052750d-7ce6-4fee-8b97-f18ea3be457d-config\") pod \"machine-approver-56656f9798-jh7qb\" (UID: \"3052750d-7ce6-4fee-8b97-f18ea3be457d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" Nov 24 
12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609310 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ef6223b-8ceb-4a44-b845-985899aff96b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dcf6j\" (UID: \"7ef6223b-8ceb-4a44-b845-985899aff96b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609324 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3532e932-4436-4950-8f1d-b622a393356e-serving-cert\") pod \"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609338 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dd2e0b-db34-4962-a370-03deea21911a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609353 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9dd2e0b-db34-4962-a370-03deea21911a-audit-dir\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609366 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-etcd-service-ca\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609382 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e779576-9402-40c2-bdf6-a62360dc60b3-config\") pod \"kube-apiserver-operator-766d6c64bb-lpjnn\" (UID: \"4e779576-9402-40c2-bdf6-a62360dc60b3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609396 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97bl\" (UniqueName: \"kubernetes.io/projected/1764ecb0-77fe-4ff4-9106-6860622b2491-kube-api-access-z97bl\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtd74\" (UID: \"1764ecb0-77fe-4ff4-9106-6860622b2491\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609413 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rvd\" (UniqueName: \"kubernetes.io/projected/507084c7-1280-4943-bff6-497f1dc21c0a-kube-api-access-p4rvd\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609428 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eada43a-ea1e-4565-a042-716f030ba99d-serving-cert\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609443 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cc9af663-c7f1-485e-a7fc-709da901e9e1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jl279\" (UID: \"cc9af663-c7f1-485e-a7fc-709da901e9e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609459 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/507084c7-1280-4943-bff6-497f1dc21c0a-console-oauth-config\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609474 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/080a5d44-2fa6-4e44-bd77-59047f85aea9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xjpj4\" (UID: \"080a5d44-2fa6-4e44-bd77-59047f85aea9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609489 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e779576-9402-40c2-bdf6-a62360dc60b3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lpjnn\" (UID: \"4e779576-9402-40c2-bdf6-a62360dc60b3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609504 4930 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl655\" (UniqueName: \"kubernetes.io/projected/8eada43a-ea1e-4565-a042-716f030ba99d-kube-api-access-bl655\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609518 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3052750d-7ce6-4fee-8b97-f18ea3be457d-auth-proxy-config\") pod \"machine-approver-56656f9798-jh7qb\" (UID: \"3052750d-7ce6-4fee-8b97-f18ea3be457d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609557 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6639778e-480a-4822-90cb-48d2e976d509-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-kz2bg\" (UID: \"6639778e-480a-4822-90cb-48d2e976d509\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609572 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4ccb41c-8cdd-4751-8012-49fae4dc2bcb-proxy-tls\") pod \"machine-config-operator-74547568cd-226nn\" (UID: \"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609587 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28gps\" (UniqueName: 
\"kubernetes.io/projected/c4ccb41c-8cdd-4751-8012-49fae4dc2bcb-kube-api-access-28gps\") pod \"machine-config-operator-74547568cd-226nn\" (UID: \"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609608 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-etcd-client\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609624 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzrzc\" (UniqueName: \"kubernetes.io/projected/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-kube-api-access-dzrzc\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609640 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609656 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1764ecb0-77fe-4ff4-9106-6860622b2491-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtd74\" (UID: \"1764ecb0-77fe-4ff4-9106-6860622b2491\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609672 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgzt\" (UniqueName: \"kubernetes.io/projected/084904b5-0321-4d4f-b26a-48c5950a5d98-kube-api-access-rqgzt\") pod \"package-server-manager-789f6589d5-58bfl\" (UID: \"084904b5-0321-4d4f-b26a-48c5950a5d98\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609690 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4ccb41c-8cdd-4751-8012-49fae4dc2bcb-images\") pod \"machine-config-operator-74547568cd-226nn\" (UID: \"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609706 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/084904b5-0321-4d4f-b26a-48c5950a5d98-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-58bfl\" (UID: \"084904b5-0321-4d4f-b26a-48c5950a5d98\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609723 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8908bf3-e171-4859-80c7-baa64ca6e11c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qh578\" (UID: \"f8908bf3-e171-4859-80c7-baa64ca6e11c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh578" Nov 24 12:01:31 crc kubenswrapper[4930]: 
I1124 12:01:31.609739 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f8908bf3-e171-4859-80c7-baa64ca6e11c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qh578\" (UID: \"f8908bf3-e171-4859-80c7-baa64ca6e11c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh578" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609762 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6639778e-480a-4822-90cb-48d2e976d509-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-kz2bg\" (UID: \"6639778e-480a-4822-90cb-48d2e976d509\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609778 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnl9t\" (UniqueName: \"kubernetes.io/projected/6639778e-480a-4822-90cb-48d2e976d509-kube-api-access-gnl9t\") pod \"cluster-image-registry-operator-dc59b4c8b-kz2bg\" (UID: \"6639778e-480a-4822-90cb-48d2e976d509\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609793 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/71dbe23d-480e-43aa-8106-b19ae5b98734-metrics-tls\") pod \"ingress-operator-5b745b69d9-55nsz\" (UID: \"71dbe23d-480e-43aa-8106-b19ae5b98734\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609809 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwx47\" (UniqueName: 
\"kubernetes.io/projected/71dbe23d-480e-43aa-8106-b19ae5b98734-kube-api-access-cwx47\") pod \"ingress-operator-5b745b69d9-55nsz\" (UID: \"71dbe23d-480e-43aa-8106-b19ae5b98734\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609833 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9dd2e0b-db34-4962-a370-03deea21911a-encryption-config\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609847 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3532e932-4436-4950-8f1d-b622a393356e-config\") pod \"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609866 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkdt7\" (UniqueName: \"kubernetes.io/projected/080a5d44-2fa6-4e44-bd77-59047f85aea9-kube-api-access-lkdt7\") pod \"control-plane-machine-set-operator-78cbb6b69f-xjpj4\" (UID: \"080a5d44-2fa6-4e44-bd77-59047f85aea9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609881 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9dd2e0b-db34-4962-a370-03deea21911a-audit-policies\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609896 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/507084c7-1280-4943-bff6-497f1dc21c0a-console-serving-cert\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609909 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9dd2e0b-db34-4962-a370-03deea21911a-etcd-client\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609924 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-client-ca\") pod \"route-controller-manager-6576b87f9c-4ksnz\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609940 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4s7r\" (UniqueName: \"kubernetes.io/projected/9b68a023-c6c2-458f-a714-a084b12a83cc-kube-api-access-t4s7r\") pod \"migrator-59844c95c7-qdqq8\" (UID: \"9b68a023-c6c2-458f-a714-a084b12a83cc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdqq8" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609956 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-serving-cert\") pod \"route-controller-manager-6576b87f9c-4ksnz\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609974 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59xgd\" (UniqueName: \"kubernetes.io/projected/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-kube-api-access-59xgd\") pod \"route-controller-manager-6576b87f9c-4ksnz\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.609992 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4nfq\" (UniqueName: \"kubernetes.io/projected/b1ab09e9-91ba-481e-b364-12e2a90bed8e-kube-api-access-j4nfq\") pod \"dns-operator-744455d44c-s42xf\" (UID: \"b1ab09e9-91ba-481e-b364-12e2a90bed8e\") " pod="openshift-dns-operator/dns-operator-744455d44c-s42xf" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.610007 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ef6223b-8ceb-4a44-b845-985899aff96b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dcf6j\" (UID: \"7ef6223b-8ceb-4a44-b845-985899aff96b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.610026 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71dbe23d-480e-43aa-8106-b19ae5b98734-trusted-ca\") pod \"ingress-operator-5b745b69d9-55nsz\" (UID: \"71dbe23d-480e-43aa-8106-b19ae5b98734\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.610040 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-config\") pod \"route-controller-manager-6576b87f9c-4ksnz\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.610055 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cjm5\" (UniqueName: \"kubernetes.io/projected/f8908bf3-e171-4859-80c7-baa64ca6e11c-kube-api-access-9cjm5\") pod \"marketplace-operator-79b997595-qh578\" (UID: \"f8908bf3-e171-4859-80c7-baa64ca6e11c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh578" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.610071 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-trusted-ca-bundle\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.610085 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9dd2e0b-db34-4962-a370-03deea21911a-serving-cert\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.610103 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-config\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.610119 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3532e932-4436-4950-8f1d-b622a393356e-service-ca-bundle\") pod \"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.610863 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.610891 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h4f7j"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.612284 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4xdgm"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.614795 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.615039 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fpv5v"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.615875 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bkp4v"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.622820 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-65dmj"] Nov 24 12:01:31 crc 
kubenswrapper[4930]: I1124 12:01:31.623875 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-65dmj" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.625520 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.647795 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.659412 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.680302 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.699942 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711134 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-service-ca\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711164 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-oauth-serving-cert\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc 
kubenswrapper[4930]: I1124 12:01:31.711185 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9dd2e0b-db34-4962-a370-03deea21911a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711213 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6639778e-480a-4822-90cb-48d2e976d509-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-kz2bg\" (UID: \"6639778e-480a-4822-90cb-48d2e976d509\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711231 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4ccb41c-8cdd-4751-8012-49fae4dc2bcb-auth-proxy-config\") pod \"machine-config-operator-74547568cd-226nn\" (UID: \"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711254 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-etcd-ca\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711277 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3052750d-7ce6-4fee-8b97-f18ea3be457d-machine-approver-tls\") pod \"machine-approver-56656f9798-jh7qb\" (UID: 
\"3052750d-7ce6-4fee-8b97-f18ea3be457d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711293 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-serving-cert\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711324 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss5nl\" (UniqueName: \"kubernetes.io/projected/3532e932-4436-4950-8f1d-b622a393356e-kube-api-access-ss5nl\") pod \"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711357 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71dbe23d-480e-43aa-8106-b19ae5b98734-bound-sa-token\") pod \"ingress-operator-5b745b69d9-55nsz\" (UID: \"71dbe23d-480e-43aa-8106-b19ae5b98734\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711379 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tqwh\" (UniqueName: \"kubernetes.io/projected/f9dd2e0b-db34-4962-a370-03deea21911a-kube-api-access-5tqwh\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711394 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3052750d-7ce6-4fee-8b97-f18ea3be457d-config\") pod \"machine-approver-56656f9798-jh7qb\" (UID: \"3052750d-7ce6-4fee-8b97-f18ea3be457d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711418 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ef6223b-8ceb-4a44-b845-985899aff96b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dcf6j\" (UID: \"7ef6223b-8ceb-4a44-b845-985899aff96b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711434 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3532e932-4436-4950-8f1d-b622a393356e-serving-cert\") pod \"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711451 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dd2e0b-db34-4962-a370-03deea21911a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711467 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9dd2e0b-db34-4962-a370-03deea21911a-audit-dir\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711486 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-etcd-service-ca\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711506 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e779576-9402-40c2-bdf6-a62360dc60b3-config\") pod \"kube-apiserver-operator-766d6c64bb-lpjnn\" (UID: \"4e779576-9402-40c2-bdf6-a62360dc60b3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711523 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z97bl\" (UniqueName: \"kubernetes.io/projected/1764ecb0-77fe-4ff4-9106-6860622b2491-kube-api-access-z97bl\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtd74\" (UID: \"1764ecb0-77fe-4ff4-9106-6860622b2491\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711562 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cc9af663-c7f1-485e-a7fc-709da901e9e1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jl279\" (UID: \"cc9af663-c7f1-485e-a7fc-709da901e9e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711584 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4rvd\" (UniqueName: \"kubernetes.io/projected/507084c7-1280-4943-bff6-497f1dc21c0a-kube-api-access-p4rvd\") pod \"console-f9d7485db-qfqmk\" (UID: 
\"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711602 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eada43a-ea1e-4565-a042-716f030ba99d-serving-cert\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711618 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/507084c7-1280-4943-bff6-497f1dc21c0a-console-oauth-config\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711637 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/080a5d44-2fa6-4e44-bd77-59047f85aea9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xjpj4\" (UID: \"080a5d44-2fa6-4e44-bd77-59047f85aea9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711655 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e779576-9402-40c2-bdf6-a62360dc60b3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lpjnn\" (UID: \"4e779576-9402-40c2-bdf6-a62360dc60b3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711672 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl655\" 
(UniqueName: \"kubernetes.io/projected/8eada43a-ea1e-4565-a042-716f030ba99d-kube-api-access-bl655\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711688 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzrzc\" (UniqueName: \"kubernetes.io/projected/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-kube-api-access-dzrzc\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711702 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711716 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3052750d-7ce6-4fee-8b97-f18ea3be457d-auth-proxy-config\") pod \"machine-approver-56656f9798-jh7qb\" (UID: \"3052750d-7ce6-4fee-8b97-f18ea3be457d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711732 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6639778e-480a-4822-90cb-48d2e976d509-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-kz2bg\" (UID: \"6639778e-480a-4822-90cb-48d2e976d509\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711749 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4ccb41c-8cdd-4751-8012-49fae4dc2bcb-proxy-tls\") pod \"machine-config-operator-74547568cd-226nn\" (UID: \"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711765 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28gps\" (UniqueName: \"kubernetes.io/projected/c4ccb41c-8cdd-4751-8012-49fae4dc2bcb-kube-api-access-28gps\") pod \"machine-config-operator-74547568cd-226nn\" (UID: \"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711780 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-etcd-client\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711798 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1764ecb0-77fe-4ff4-9106-6860622b2491-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtd74\" (UID: \"1764ecb0-77fe-4ff4-9106-6860622b2491\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711814 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/c4ccb41c-8cdd-4751-8012-49fae4dc2bcb-images\") pod \"machine-config-operator-74547568cd-226nn\" (UID: \"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711830 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/084904b5-0321-4d4f-b26a-48c5950a5d98-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-58bfl\" (UID: \"084904b5-0321-4d4f-b26a-48c5950a5d98\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711846 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqgzt\" (UniqueName: \"kubernetes.io/projected/084904b5-0321-4d4f-b26a-48c5950a5d98-kube-api-access-rqgzt\") pod \"package-server-manager-789f6589d5-58bfl\" (UID: \"084904b5-0321-4d4f-b26a-48c5950a5d98\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711885 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8908bf3-e171-4859-80c7-baa64ca6e11c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qh578\" (UID: \"f8908bf3-e171-4859-80c7-baa64ca6e11c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh578" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711903 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f8908bf3-e171-4859-80c7-baa64ca6e11c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qh578\" (UID: \"f8908bf3-e171-4859-80c7-baa64ca6e11c\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-qh578" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711929 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwx47\" (UniqueName: \"kubernetes.io/projected/71dbe23d-480e-43aa-8106-b19ae5b98734-kube-api-access-cwx47\") pod \"ingress-operator-5b745b69d9-55nsz\" (UID: \"71dbe23d-480e-43aa-8106-b19ae5b98734\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711944 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6639778e-480a-4822-90cb-48d2e976d509-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-kz2bg\" (UID: \"6639778e-480a-4822-90cb-48d2e976d509\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711964 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnl9t\" (UniqueName: \"kubernetes.io/projected/6639778e-480a-4822-90cb-48d2e976d509-kube-api-access-gnl9t\") pod \"cluster-image-registry-operator-dc59b4c8b-kz2bg\" (UID: \"6639778e-480a-4822-90cb-48d2e976d509\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.711979 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/71dbe23d-480e-43aa-8106-b19ae5b98734-metrics-tls\") pod \"ingress-operator-5b745b69d9-55nsz\" (UID: \"71dbe23d-480e-43aa-8106-b19ae5b98734\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712004 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/f9dd2e0b-db34-4962-a370-03deea21911a-encryption-config\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712020 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3532e932-4436-4950-8f1d-b622a393356e-config\") pod \"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712038 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkdt7\" (UniqueName: \"kubernetes.io/projected/080a5d44-2fa6-4e44-bd77-59047f85aea9-kube-api-access-lkdt7\") pod \"control-plane-machine-set-operator-78cbb6b69f-xjpj4\" (UID: \"080a5d44-2fa6-4e44-bd77-59047f85aea9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712052 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9dd2e0b-db34-4962-a370-03deea21911a-audit-policies\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712067 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/507084c7-1280-4943-bff6-497f1dc21c0a-console-serving-cert\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712080 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9dd2e0b-db34-4962-a370-03deea21911a-etcd-client\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712094 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-client-ca\") pod \"route-controller-manager-6576b87f9c-4ksnz\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712111 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4s7r\" (UniqueName: \"kubernetes.io/projected/9b68a023-c6c2-458f-a714-a084b12a83cc-kube-api-access-t4s7r\") pod \"migrator-59844c95c7-qdqq8\" (UID: \"9b68a023-c6c2-458f-a714-a084b12a83cc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdqq8" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712125 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-serving-cert\") pod \"route-controller-manager-6576b87f9c-4ksnz\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712140 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59xgd\" (UniqueName: \"kubernetes.io/projected/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-kube-api-access-59xgd\") pod \"route-controller-manager-6576b87f9c-4ksnz\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712156 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4nfq\" (UniqueName: \"kubernetes.io/projected/b1ab09e9-91ba-481e-b364-12e2a90bed8e-kube-api-access-j4nfq\") pod \"dns-operator-744455d44c-s42xf\" (UID: \"b1ab09e9-91ba-481e-b364-12e2a90bed8e\") " pod="openshift-dns-operator/dns-operator-744455d44c-s42xf" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712173 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ef6223b-8ceb-4a44-b845-985899aff96b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dcf6j\" (UID: \"7ef6223b-8ceb-4a44-b845-985899aff96b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712193 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71dbe23d-480e-43aa-8106-b19ae5b98734-trusted-ca\") pod \"ingress-operator-5b745b69d9-55nsz\" (UID: \"71dbe23d-480e-43aa-8106-b19ae5b98734\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712195 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3052750d-7ce6-4fee-8b97-f18ea3be457d-config\") pod \"machine-approver-56656f9798-jh7qb\" (UID: \"3052750d-7ce6-4fee-8b97-f18ea3be457d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712209 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-config\") pod 
\"route-controller-manager-6576b87f9c-4ksnz\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712277 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cjm5\" (UniqueName: \"kubernetes.io/projected/f8908bf3-e171-4859-80c7-baa64ca6e11c-kube-api-access-9cjm5\") pod \"marketplace-operator-79b997595-qh578\" (UID: \"f8908bf3-e171-4859-80c7-baa64ca6e11c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh578" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712299 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9dd2e0b-db34-4962-a370-03deea21911a-serving-cert\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712317 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-config\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712340 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-trusted-ca-bundle\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712357 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3532e932-4436-4950-8f1d-b622a393356e-service-ca-bundle\") pod \"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712381 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-client-ca\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712399 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc6dx\" (UniqueName: \"kubernetes.io/projected/3052750d-7ce6-4fee-8b97-f18ea3be457d-kube-api-access-xc6dx\") pod \"machine-approver-56656f9798-jh7qb\" (UID: \"3052750d-7ce6-4fee-8b97-f18ea3be457d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712403 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9dd2e0b-db34-4962-a370-03deea21911a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712421 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwmnr\" (UniqueName: \"kubernetes.io/projected/cc9af663-c7f1-485e-a7fc-709da901e9e1-kube-api-access-mwmnr\") pod \"openshift-config-operator-7777fb866f-jl279\" (UID: \"cc9af663-c7f1-485e-a7fc-709da901e9e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" Nov 24 12:01:31 crc kubenswrapper[4930]: 
I1124 12:01:31.712413 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-service-ca\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712461 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1ab09e9-91ba-481e-b364-12e2a90bed8e-metrics-tls\") pod \"dns-operator-744455d44c-s42xf\" (UID: \"b1ab09e9-91ba-481e-b364-12e2a90bed8e\") " pod="openshift-dns-operator/dns-operator-744455d44c-s42xf" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712492 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-console-config\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712516 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-config\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712524 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-etcd-service-ca\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712597 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-89pf7\" (UniqueName: \"kubernetes.io/projected/7ef6223b-8ceb-4a44-b845-985899aff96b-kube-api-access-89pf7\") pod \"openshift-apiserver-operator-796bbdcf4f-dcf6j\" (UID: \"7ef6223b-8ceb-4a44-b845-985899aff96b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712635 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e779576-9402-40c2-bdf6-a62360dc60b3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lpjnn\" (UID: \"4e779576-9402-40c2-bdf6-a62360dc60b3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712659 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc9af663-c7f1-485e-a7fc-709da901e9e1-serving-cert\") pod \"openshift-config-operator-7777fb866f-jl279\" (UID: \"cc9af663-c7f1-485e-a7fc-709da901e9e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712686 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1764ecb0-77fe-4ff4-9106-6860622b2491-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtd74\" (UID: \"1764ecb0-77fe-4ff4-9106-6860622b2491\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712712 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3532e932-4436-4950-8f1d-b622a393356e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-m48vx\" (UID: 
\"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712795 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4ccb41c-8cdd-4751-8012-49fae4dc2bcb-auth-proxy-config\") pod \"machine-config-operator-74547568cd-226nn\" (UID: \"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712815 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-etcd-ca\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712957 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dd2e0b-db34-4962-a370-03deea21911a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.712999 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9dd2e0b-db34-4962-a370-03deea21911a-audit-dir\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.714136 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3532e932-4436-4950-8f1d-b622a393356e-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.714225 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9dd2e0b-db34-4962-a370-03deea21911a-audit-policies\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.714404 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-config\") pod \"route-controller-manager-6576b87f9c-4ksnz\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.714824 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cc9af663-c7f1-485e-a7fc-709da901e9e1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jl279\" (UID: \"cc9af663-c7f1-485e-a7fc-709da901e9e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.715002 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-oauth-serving-cert\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.715285 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/6639778e-480a-4822-90cb-48d2e976d509-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-kz2bg\" (UID: \"6639778e-480a-4822-90cb-48d2e976d509\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.715893 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e779576-9402-40c2-bdf6-a62360dc60b3-config\") pod \"kube-apiserver-operator-766d6c64bb-lpjnn\" (UID: \"4e779576-9402-40c2-bdf6-a62360dc60b3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.716764 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3052750d-7ce6-4fee-8b97-f18ea3be457d-auth-proxy-config\") pod \"machine-approver-56656f9798-jh7qb\" (UID: \"3052750d-7ce6-4fee-8b97-f18ea3be457d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.717068 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-console-config\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.717184 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-client-ca\") pod \"route-controller-manager-6576b87f9c-4ksnz\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.717322 4930 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ef6223b-8ceb-4a44-b845-985899aff96b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dcf6j\" (UID: \"7ef6223b-8ceb-4a44-b845-985899aff96b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.717532 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-config\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.717886 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-config\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.718091 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-client-ca\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.719262 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3532e932-4436-4950-8f1d-b622a393356e-service-ca-bundle\") pod \"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.719765 
4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.719805 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1764ecb0-77fe-4ff4-9106-6860622b2491-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtd74\" (UID: \"1764ecb0-77fe-4ff4-9106-6860622b2491\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.720941 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-trusted-ca-bundle\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.721397 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71dbe23d-480e-43aa-8106-b19ae5b98734-trusted-ca\") pod \"ingress-operator-5b745b69d9-55nsz\" (UID: \"71dbe23d-480e-43aa-8106-b19ae5b98734\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.721763 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9dd2e0b-db34-4962-a370-03deea21911a-serving-cert\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc 
kubenswrapper[4930]: I1124 12:01:31.721766 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3532e932-4436-4950-8f1d-b622a393356e-config\") pod \"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.721857 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.721980 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eada43a-ea1e-4565-a042-716f030ba99d-serving-cert\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.722342 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1764ecb0-77fe-4ff4-9106-6860622b2491-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtd74\" (UID: \"1764ecb0-77fe-4ff4-9106-6860622b2491\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.722838 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1ab09e9-91ba-481e-b364-12e2a90bed8e-metrics-tls\") pod \"dns-operator-744455d44c-s42xf\" (UID: \"b1ab09e9-91ba-481e-b364-12e2a90bed8e\") " pod="openshift-dns-operator/dns-operator-744455d44c-s42xf" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.722975 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-etcd-client\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.724204 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/507084c7-1280-4943-bff6-497f1dc21c0a-console-serving-cert\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.724424 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e779576-9402-40c2-bdf6-a62360dc60b3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lpjnn\" (UID: \"4e779576-9402-40c2-bdf6-a62360dc60b3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.724453 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-serving-cert\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.724514 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc9af663-c7f1-485e-a7fc-709da901e9e1-serving-cert\") pod \"openshift-config-operator-7777fb866f-jl279\" (UID: \"cc9af663-c7f1-485e-a7fc-709da901e9e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.724844 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-serving-cert\") pod \"route-controller-manager-6576b87f9c-4ksnz\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.725829 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/507084c7-1280-4943-bff6-497f1dc21c0a-console-oauth-config\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.727434 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6639778e-480a-4822-90cb-48d2e976d509-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-kz2bg\" (UID: \"6639778e-480a-4822-90cb-48d2e976d509\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.729632 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3052750d-7ce6-4fee-8b97-f18ea3be457d-machine-approver-tls\") pod \"machine-approver-56656f9798-jh7qb\" (UID: \"3052750d-7ce6-4fee-8b97-f18ea3be457d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.730196 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/71dbe23d-480e-43aa-8106-b19ae5b98734-metrics-tls\") pod \"ingress-operator-5b745b69d9-55nsz\" (UID: \"71dbe23d-480e-43aa-8106-b19ae5b98734\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 
12:01:31.730517 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b6v7j"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.731007 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9dd2e0b-db34-4962-a370-03deea21911a-encryption-config\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.731482 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9dd2e0b-db34-4962-a370-03deea21911a-etcd-client\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.731526 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3532e932-4436-4950-8f1d-b622a393356e-serving-cert\") pod \"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.731895 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.735395 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ef6223b-8ceb-4a44-b845-985899aff96b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dcf6j\" (UID: \"7ef6223b-8ceb-4a44-b845-985899aff96b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.741740 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.742687 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b6v7j"] Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.749042 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4ccb41c-8cdd-4751-8012-49fae4dc2bcb-images\") pod \"machine-config-operator-74547568cd-226nn\" (UID: \"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.759758 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.780294 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.790911 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4ccb41c-8cdd-4751-8012-49fae4dc2bcb-proxy-tls\") pod \"machine-config-operator-74547568cd-226nn\" (UID: 
\"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.799964 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.820207 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.841098 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.848342 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f8908bf3-e171-4859-80c7-baa64ca6e11c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qh578\" (UID: \"f8908bf3-e171-4859-80c7-baa64ca6e11c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh578" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.868288 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.878615 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8908bf3-e171-4859-80c7-baa64ca6e11c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qh578\" (UID: \"f8908bf3-e171-4859-80c7-baa64ca6e11c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh578" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.879791 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.900226 4930 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.911255 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/080a5d44-2fa6-4e44-bd77-59047f85aea9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xjpj4\" (UID: \"080a5d44-2fa6-4e44-bd77-59047f85aea9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.920697 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.939431 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.960718 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 24 12:01:31 crc kubenswrapper[4930]: I1124 12:01:31.979512 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.000605 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.020085 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.039946 4930 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.059668 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.079441 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.099977 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.146748 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.160315 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.179170 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.199432 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.220252 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.240105 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 24 
12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.259800 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.280780 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.301076 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.320429 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.330052 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/084904b5-0321-4d4f-b26a-48c5950a5d98-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-58bfl\" (UID: \"084904b5-0321-4d4f-b26a-48c5950a5d98\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.339947 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.380292 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.399648 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.420721 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.440206 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.460163 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.480071 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.499631 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.520874 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.538816 4930 request.go:700] Waited for 1.000794695s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&limit=500&resourceVersion=0
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.541047 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.559995 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.580319 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.600870 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.620848 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.641012 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.659767 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.680773 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.699841 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.721450 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.740722 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.760174 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.780435 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.799738 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.820657 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.849268 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.860338 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.880044 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.894888 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.899958 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.920335 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.940624 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.960775 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 24 12:01:32 crc kubenswrapper[4930]: I1124 12:01:32.979682 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.005910 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.020386 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.040712 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.060095 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.080505 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.100702 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.127899 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.139883 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.159812 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.180122 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.200451 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.219967 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.240136 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.260476 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.279206 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.300960 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.321405 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.340051 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.360931 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.380661 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.400605 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.436634 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6639778e-480a-4822-90cb-48d2e976d509-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-kz2bg\" (UID: \"6639778e-480a-4822-90cb-48d2e976d509\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.459202 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71dbe23d-480e-43aa-8106-b19ae5b98734-bound-sa-token\") pod \"ingress-operator-5b745b69d9-55nsz\" (UID: \"71dbe23d-480e-43aa-8106-b19ae5b98734\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.475138 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tqwh\" (UniqueName: \"kubernetes.io/projected/f9dd2e0b-db34-4962-a370-03deea21911a-kube-api-access-5tqwh\") pod \"apiserver-7bbb656c7d-hrls7\" (UID: \"f9dd2e0b-db34-4962-a370-03deea21911a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.495444 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss5nl\" (UniqueName: \"kubernetes.io/projected/3532e932-4436-4950-8f1d-b622a393356e-kube-api-access-ss5nl\") pod \"authentication-operator-69f744f599-m48vx\" (UID: \"3532e932-4436-4950-8f1d-b622a393356e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.516384 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwmnr\" (UniqueName: \"kubernetes.io/projected/cc9af663-c7f1-485e-a7fc-709da901e9e1-kube-api-access-mwmnr\") pod \"openshift-config-operator-7777fb866f-jl279\" (UID: \"cc9af663-c7f1-485e-a7fc-709da901e9e1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.535355 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cjm5\" (UniqueName: \"kubernetes.io/projected/f8908bf3-e171-4859-80c7-baa64ca6e11c-kube-api-access-9cjm5\") pod \"marketplace-operator-79b997595-qh578\" (UID: \"f8908bf3-e171-4859-80c7-baa64ca6e11c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qh578"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.539085 4930 request.go:700] Waited for 1.825998159s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.564654 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqgzt\" (UniqueName: \"kubernetes.io/projected/084904b5-0321-4d4f-b26a-48c5950a5d98-kube-api-access-rqgzt\") pod \"package-server-manager-789f6589d5-58bfl\" (UID: \"084904b5-0321-4d4f-b26a-48c5950a5d98\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.574611 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzrzc\" (UniqueName: \"kubernetes.io/projected/e6f9bbba-9d3d-4aec-8d86-e98fde2606ca-kube-api-access-dzrzc\") pod \"etcd-operator-b45778765-z7lsz\" (UID: \"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.587093 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.588772 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.592585 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwx47\" (UniqueName: \"kubernetes.io/projected/71dbe23d-480e-43aa-8106-b19ae5b98734-kube-api-access-cwx47\") pod \"ingress-operator-5b745b69d9-55nsz\" (UID: \"71dbe23d-480e-43aa-8106-b19ae5b98734\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.611258 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.621629 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z97bl\" (UniqueName: \"kubernetes.io/projected/1764ecb0-77fe-4ff4-9106-6860622b2491-kube-api-access-z97bl\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtd74\" (UID: \"1764ecb0-77fe-4ff4-9106-6860622b2491\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.640283 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4rvd\" (UniqueName: \"kubernetes.io/projected/507084c7-1280-4943-bff6-497f1dc21c0a-kube-api-access-p4rvd\") pod \"console-f9d7485db-qfqmk\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " pod="openshift-console/console-f9d7485db-qfqmk"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.657744 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28gps\" (UniqueName: \"kubernetes.io/projected/c4ccb41c-8cdd-4751-8012-49fae4dc2bcb-kube-api-access-28gps\") pod \"machine-config-operator-74547568cd-226nn\" (UID: \"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.681490 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.684610 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnl9t\" (UniqueName: \"kubernetes.io/projected/6639778e-480a-4822-90cb-48d2e976d509-kube-api-access-gnl9t\") pod \"cluster-image-registry-operator-dc59b4c8b-kz2bg\" (UID: \"6639778e-480a-4822-90cb-48d2e976d509\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.698226 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59xgd\" (UniqueName: \"kubernetes.io/projected/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-kube-api-access-59xgd\") pod \"route-controller-manager-6576b87f9c-4ksnz\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.717178 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qfqmk"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.723915 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4s7r\" (UniqueName: \"kubernetes.io/projected/9b68a023-c6c2-458f-a714-a084b12a83cc-kube-api-access-t4s7r\") pod \"migrator-59844c95c7-qdqq8\" (UID: \"9b68a023-c6c2-458f-a714-a084b12a83cc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdqq8"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.739802 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.747949 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4nfq\" (UniqueName: \"kubernetes.io/projected/b1ab09e9-91ba-481e-b364-12e2a90bed8e-kube-api-access-j4nfq\") pod \"dns-operator-744455d44c-s42xf\" (UID: \"b1ab09e9-91ba-481e-b364-12e2a90bed8e\") " pod="openshift-dns-operator/dns-operator-744455d44c-s42xf"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.767411 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e779576-9402-40c2-bdf6-a62360dc60b3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lpjnn\" (UID: \"4e779576-9402-40c2-bdf6-a62360dc60b3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.773072 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.779424 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.780755 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89pf7\" (UniqueName: \"kubernetes.io/projected/7ef6223b-8ceb-4a44-b845-985899aff96b-kube-api-access-89pf7\") pod \"openshift-apiserver-operator-796bbdcf4f-dcf6j\" (UID: \"7ef6223b-8ceb-4a44-b845-985899aff96b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.788203 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.794445 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.801233 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qh578"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.805374 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7"]
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.812748 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl655\" (UniqueName: \"kubernetes.io/projected/8eada43a-ea1e-4565-a042-716f030ba99d-kube-api-access-bl655\") pod \"controller-manager-879f6c89f-dkr44\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.825209 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkdt7\" (UniqueName: \"kubernetes.io/projected/080a5d44-2fa6-4e44-bd77-59047f85aea9-kube-api-access-lkdt7\") pod \"control-plane-machine-set-operator-78cbb6b69f-xjpj4\" (UID: \"080a5d44-2fa6-4e44-bd77-59047f85aea9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.839620 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.845442 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc6dx\" (UniqueName: \"kubernetes.io/projected/3052750d-7ce6-4fee-8b97-f18ea3be457d-kube-api-access-xc6dx\") pod \"machine-approver-56656f9798-jh7qb\" (UID: \"3052750d-7ce6-4fee-8b97-f18ea3be457d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.860498 4930 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 24 12:01:33 crc kubenswrapper[4930]: W1124 12:01:33.861103 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9dd2e0b_db34_4962_a370_03deea21911a.slice/crio-fa7ff2308201df82620d0043a81f2f6f3e44556de556240afafa231524505606 WatchSource:0}: Error finding container fa7ff2308201df82620d0043a81f2f6f3e44556de556240afafa231524505606: Status 404 returned error can't find the container with id fa7ff2308201df82620d0043a81f2f6f3e44556de556240afafa231524505606
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.879200 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdqq8"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.881200 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.886453 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.937363 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.951404 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972076 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-etcd-serving-ca\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972115 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2chv\" (UniqueName: \"kubernetes.io/projected/84eba226-cf40-4011-a4a0-0cb9e774da5e-kube-api-access-l2chv\") pod \"downloads-7954f5f757-l4vrl\" (UID: \"84eba226-cf40-4011-a4a0-0cb9e774da5e\") " pod="openshift-console/downloads-7954f5f757-l4vrl"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972138 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6302d88-9f2e-49a0-b1af-8d14585b6e2a-config\") pod \"console-operator-58897d9998-5bktz\" (UID: \"d6302d88-9f2e-49a0-b1af-8d14585b6e2a\") " pod="openshift-console-operator/console-operator-58897d9998-5bktz"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972159 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w6lt\" (UniqueName: \"kubernetes.io/projected/d6302d88-9f2e-49a0-b1af-8d14585b6e2a-kube-api-access-4w6lt\") pod \"console-operator-58897d9998-5bktz\" (UID: \"d6302d88-9f2e-49a0-b1af-8d14585b6e2a\") " pod="openshift-console-operator/console-operator-58897d9998-5bktz"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972196 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x64bf\" (UniqueName: \"kubernetes.io/projected/b654e8f6-b229-4515-92f7-68367ffa48a2-kube-api-access-x64bf\") pod \"cluster-samples-operator-665b6dd947-fvtkh\" (UID: \"b654e8f6-b229-4515-92f7-68367ffa48a2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972217 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d144e669-4571-4f1e-91f4-8584b50743ec-audit-dir\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972240 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6022f6c-fa48-40b0-b2c2-e74b56071b38-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972261 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-audit\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972280 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6022f6c-fa48-40b0-b2c2-e74b56071b38-registry-certificates\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972310 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-image-import-ca\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972328 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/28bc15a8-f8ed-4595-8a4f-e0d9e895c085-images\") pod \"machine-api-operator-5694c8668f-kw8wv\" (UID: \"28bc15a8-f8ed-4595-8a4f-e0d9e895c085\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972357 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8vgz\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-kube-api-access-f8vgz\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972378 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d144e669-4571-4f1e-91f4-8584b50743ec-node-pullsecrets\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972396 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d144e669-4571-4f1e-91f4-8584b50743ec-encryption-config\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972420 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6022f6c-fa48-40b0-b2c2-e74b56071b38-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972438 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972459 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b654e8f6-b229-4515-92f7-68367ffa48a2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fvtkh\" (UID: \"b654e8f6-b229-4515-92f7-68367ffa48a2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972482 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/28bc15a8-f8ed-4595-8a4f-e0d9e895c085-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kw8wv\" (UID: \"28bc15a8-f8ed-4595-8a4f-e0d9e895c085\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972508 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43f01bb4-4b85-4160-b8a9-8735ae78908d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l9l6\" (UID: \"43f01bb4-4b85-4160-b8a9-8735ae78908d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972532 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mh45w\" (UID: \"e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972587 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43f01bb4-4b85-4160-b8a9-8735ae78908d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l9l6\" (UID: \"43f01bb4-4b85-4160-b8a9-8735ae78908d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972608 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985-config\") pod \"kube-controller-manager-operator-78b949d7b-mh45w\" (UID: \"e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972629 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-config\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972648 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mh45w\" (UID: \"e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972673 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2920b6e9-9296-4249-a539-f84d65e0d79c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7jz7\" (UID: \"2920b6e9-9296-4249-a539-f84d65e0d79c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972694 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43f01bb4-4b85-4160-b8a9-8735ae78908d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l9l6\" (UID: \"43f01bb4-4b85-4160-b8a9-8735ae78908d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972738 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vpzt\" (UniqueName: \"kubernetes.io/projected/d144e669-4571-4f1e-91f4-8584b50743ec-kube-api-access-8vpzt\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972767 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972789 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6022f6c-fa48-40b0-b2c2-e74b56071b38-trusted-ca\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972810 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjn2v\" (UniqueName: \"kubernetes.io/projected/28bc15a8-f8ed-4595-8a4f-e0d9e895c085-kube-api-access-mjn2v\") pod \"machine-api-operator-5694c8668f-kw8wv\" (UID: \"28bc15a8-f8ed-4595-8a4f-e0d9e895c085\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972832 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d144e669-4571-4f1e-91f4-8584b50743ec-etcd-client\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972863 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d144e669-4571-4f1e-91f4-8584b50743ec-serving-cert\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972885 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bc15a8-f8ed-4595-8a4f-e0d9e895c085-config\") pod \"machine-api-operator-5694c8668f-kw8wv\" (UID: \"28bc15a8-f8ed-4595-8a4f-e0d9e895c085\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972920 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6302d88-9f2e-49a0-b1af-8d14585b6e2a-trusted-ca\") pod \"console-operator-58897d9998-5bktz\" (UID: \"d6302d88-9f2e-49a0-b1af-8d14585b6e2a\") " pod="openshift-console-operator/console-operator-58897d9998-5bktz"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972952 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-bound-sa-token\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v"
Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972975 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-registry-tls\") pod \"image-registry-697d97f7c8-fpv5v\" (UID:
\"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.972997 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fx8f\" (UniqueName: \"kubernetes.io/projected/2920b6e9-9296-4249-a539-f84d65e0d79c-kube-api-access-9fx8f\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7jz7\" (UID: \"2920b6e9-9296-4249-a539-f84d65e0d79c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7" Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.973048 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2920b6e9-9296-4249-a539-f84d65e0d79c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7jz7\" (UID: \"2920b6e9-9296-4249-a539-f84d65e0d79c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7" Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.973069 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6302d88-9f2e-49a0-b1af-8d14585b6e2a-serving-cert\") pod \"console-operator-58897d9998-5bktz\" (UID: \"d6302d88-9f2e-49a0-b1af-8d14585b6e2a\") " pod="openshift-console-operator/console-operator-58897d9998-5bktz" Nov 24 12:01:33 crc kubenswrapper[4930]: E1124 12:01:33.973776 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:34.473761908 +0000 UTC m=+141.088089858 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:33 crc kubenswrapper[4930]: I1124 12:01:33.997981 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.030341 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-s42xf" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.064220 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.073959 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074230 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bc15a8-f8ed-4595-8a4f-e0d9e895c085-config\") pod \"machine-api-operator-5694c8668f-kw8wv\" (UID: \"28bc15a8-f8ed-4595-8a4f-e0d9e895c085\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074269 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f956bae9-4db9-4698-bb42-5b6c872d8b35-audit-dir\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074294 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggvcd\" (UniqueName: \"kubernetes.io/projected/c8f49176-755f-460b-857a-e82ee9abd6d7-kube-api-access-ggvcd\") pod \"dns-default-4xdgm\" (UID: \"c8f49176-755f-460b-857a-e82ee9abd6d7\") " pod="openshift-dns/dns-default-4xdgm" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074331 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6302d88-9f2e-49a0-b1af-8d14585b6e2a-trusted-ca\") pod \"console-operator-58897d9998-5bktz\" (UID: \"d6302d88-9f2e-49a0-b1af-8d14585b6e2a\") " pod="openshift-console-operator/console-operator-58897d9998-5bktz" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074355 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnwtm\" (UniqueName: \"kubernetes.io/projected/977b7ce9-3cab-4d86-b297-e062e48195b5-kube-api-access-hnwtm\") pod \"ingress-canary-bkp4v\" (UID: \"977b7ce9-3cab-4d86-b297-e062e48195b5\") " pod="openshift-ingress-canary/ingress-canary-bkp4v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074377 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3-webhook-cert\") pod \"packageserver-d55dfcdfc-q5hrn\" (UID: \"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" Nov 24 
12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074396 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfcpx\" (UniqueName: \"kubernetes.io/projected/53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3-kube-api-access-zfcpx\") pod \"packageserver-d55dfcdfc-q5hrn\" (UID: \"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074439 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e38351-809f-4f9e-9c07-7930a5db7b0b-config\") pod \"service-ca-operator-777779d784-nglrn\" (UID: \"27e38351-809f-4f9e-9c07-7930a5db7b0b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074465 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074488 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4431f8fc-1c3c-462c-ae64-b2ab77eb9d57-profile-collector-cert\") pod \"catalog-operator-68c6474976-brjtv\" (UID: \"4431f8fc-1c3c-462c-ae64-b2ab77eb9d57\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074510 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-registry-tls\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074531 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8f49176-755f-460b-857a-e82ee9abd6d7-metrics-tls\") pod \"dns-default-4xdgm\" (UID: \"c8f49176-755f-460b-857a-e82ee9abd6d7\") " pod="openshift-dns/dns-default-4xdgm" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074615 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2920b6e9-9296-4249-a539-f84d65e0d79c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7jz7\" (UID: \"2920b6e9-9296-4249-a539-f84d65e0d79c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074638 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6302d88-9f2e-49a0-b1af-8d14585b6e2a-serving-cert\") pod \"console-operator-58897d9998-5bktz\" (UID: \"d6302d88-9f2e-49a0-b1af-8d14585b6e2a\") " pod="openshift-console-operator/console-operator-58897d9998-5bktz" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074664 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b85d7650-00f5-41a0-b862-b884dd7190cc-metrics-certs\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.074779 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b85d7650-00f5-41a0-b862-b884dd7190cc-default-certificate\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.075973 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6302d88-9f2e-49a0-b1af-8d14585b6e2a-trusted-ca\") pod \"console-operator-58897d9998-5bktz\" (UID: \"d6302d88-9f2e-49a0-b1af-8d14585b6e2a\") " pod="openshift-console-operator/console-operator-58897d9998-5bktz" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.076260 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3-apiservice-cert\") pod \"packageserver-d55dfcdfc-q5hrn\" (UID: \"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.076296 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w6lt\" (UniqueName: \"kubernetes.io/projected/d6302d88-9f2e-49a0-b1af-8d14585b6e2a-kube-api-access-4w6lt\") pod \"console-operator-58897d9998-5bktz\" (UID: \"d6302d88-9f2e-49a0-b1af-8d14585b6e2a\") " pod="openshift-console-operator/console-operator-58897d9998-5bktz" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.076314 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lvqv\" (UniqueName: \"kubernetes.io/projected/b85d7650-00f5-41a0-b862-b884dd7190cc-kube-api-access-6lvqv\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " 
pod="openshift-ingress/router-default-5444994796-jkm8r" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.077431 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2920b6e9-9296-4249-a539-f84d65e0d79c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7jz7\" (UID: \"2920b6e9-9296-4249-a539-f84d65e0d79c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7" Nov 24 12:01:34 crc kubenswrapper[4930]: E1124 12:01:34.077492 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:34.576420546 +0000 UTC m=+141.190748496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.077524 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x64bf\" (UniqueName: \"kubernetes.io/projected/b654e8f6-b229-4515-92f7-68367ffa48a2-kube-api-access-x64bf\") pod \"cluster-samples-operator-665b6dd947-fvtkh\" (UID: \"b654e8f6-b229-4515-92f7-68367ffa48a2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.077669 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/d144e669-4571-4f1e-91f4-8584b50743ec-audit-dir\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.077757 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-audit\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.077776 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/977b7ce9-3cab-4d86-b297-e062e48195b5-cert\") pod \"ingress-canary-bkp4v\" (UID: \"977b7ce9-3cab-4d86-b297-e062e48195b5\") " pod="openshift-ingress-canary/ingress-canary-bkp4v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.077974 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d144e669-4571-4f1e-91f4-8584b50743ec-audit-dir\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.078421 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-audit\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.078490 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bc1d3def-6313-4ed4-a518-341e82651b23-srv-cert\") pod 
\"olm-operator-6b444d44fb-c9vv5\" (UID: \"bc1d3def-6313-4ed4-a518-341e82651b23\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.078587 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zj2z\" (UniqueName: \"kubernetes.io/projected/fabadb7c-e637-4769-b633-ea2b745bb9e4-kube-api-access-4zj2z\") pod \"machine-config-controller-84d6567774-pt8p8\" (UID: \"fabadb7c-e637-4769-b633-ea2b745bb9e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.081435 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d144e669-4571-4f1e-91f4-8584b50743ec-node-pullsecrets\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.081460 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d144e669-4571-4f1e-91f4-8584b50743ec-encryption-config\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.081498 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6022f6c-fa48-40b0-b2c2-e74b56071b38-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.081514 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.081559 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082415 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3-tmpfs\") pod \"packageserver-d55dfcdfc-q5hrn\" (UID: \"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082478 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43f01bb4-4b85-4160-b8a9-8735ae78908d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l9l6\" (UID: \"43f01bb4-4b85-4160-b8a9-8735ae78908d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082508 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: 
\"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082525 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f9ee2c17-9432-45c1-ad58-0c09f7d93ad2-signing-cabundle\") pod \"service-ca-9c57cc56f-6sjlm\" (UID: \"f9ee2c17-9432-45c1-ad58-0c09f7d93ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082589 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-socket-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082685 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-secret-volume\") pod \"collect-profiles-29399760-rkv44\" (UID: \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082748 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-config\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082770 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-mh45w\" (UID: \"e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082793 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2920b6e9-9296-4249-a539-f84d65e0d79c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7jz7\" (UID: \"2920b6e9-9296-4249-a539-f84d65e0d79c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082826 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43f01bb4-4b85-4160-b8a9-8735ae78908d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l9l6\" (UID: \"43f01bb4-4b85-4160-b8a9-8735ae78908d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082848 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082866 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-config-volume\") pod \"collect-profiles-29399760-rkv44\" (UID: \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082884 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-csi-data-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082929 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fabadb7c-e637-4769-b633-ea2b745bb9e4-proxy-tls\") pod \"machine-config-controller-84d6567774-pt8p8\" (UID: \"fabadb7c-e637-4769-b633-ea2b745bb9e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082976 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.083017 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.083039 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/d6022f6c-fa48-40b0-b2c2-e74b56071b38-trusted-ca\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.083060 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b85d7650-00f5-41a0-b862-b884dd7190cc-stats-auth\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.083080 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdt8n\" (UniqueName: \"kubernetes.io/projected/4431f8fc-1c3c-462c-ae64-b2ab77eb9d57-kube-api-access-wdt8n\") pod \"catalog-operator-68c6474976-brjtv\" (UID: \"4431f8fc-1c3c-462c-ae64-b2ab77eb9d57\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.083108 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.083138 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-registration-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.083171 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d144e669-4571-4f1e-91f4-8584b50743ec-serving-cert\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.080943 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.083209 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxbqz\" (UniqueName: \"kubernetes.io/projected/4d408e31-d2da-4e32-b951-1900830ae33e-kube-api-access-lxbqz\") pod \"multus-admission-controller-857f4d67dd-h4f7j\" (UID: \"4d408e31-d2da-4e32-b951-1900830ae33e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h4f7j" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.083232 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.083253 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb2ng\" (UniqueName: \"kubernetes.io/projected/f956bae9-4db9-4698-bb42-5b6c872d8b35-kube-api-access-qb2ng\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.083278 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-bound-sa-token\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.081382 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6302d88-9f2e-49a0-b1af-8d14585b6e2a-serving-cert\") pod \"console-operator-58897d9998-5bktz\" (UID: \"d6302d88-9f2e-49a0-b1af-8d14585b6e2a\") " pod="openshift-console-operator/console-operator-58897d9998-5bktz" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.081883 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bc15a8-f8ed-4595-8a4f-e0d9e895c085-config\") pod \"machine-api-operator-5694c8668f-kw8wv\" (UID: \"28bc15a8-f8ed-4595-8a4f-e0d9e895c085\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.082171 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6022f6c-fa48-40b0-b2c2-e74b56071b38-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.081779 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d144e669-4571-4f1e-91f4-8584b50743ec-node-pullsecrets\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: E1124 12:01:34.083760 4930 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:34.583742329 +0000 UTC m=+141.198070279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.091431 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43f01bb4-4b85-4160-b8a9-8735ae78908d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l9l6\" (UID: \"43f01bb4-4b85-4160-b8a9-8735ae78908d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.083316 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fabadb7c-e637-4769-b633-ea2b745bb9e4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pt8p8\" (UID: \"fabadb7c-e637-4769-b633-ea2b745bb9e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.091708 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-config\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " 
pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.091832 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm7ss\" (UniqueName: \"kubernetes.io/projected/f9ee2c17-9432-45c1-ad58-0c09f7d93ad2-kube-api-access-nm7ss\") pod \"service-ca-9c57cc56f-6sjlm\" (UID: \"f9ee2c17-9432-45c1-ad58-0c09f7d93ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.091957 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-plugins-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.092127 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fx8f\" (UniqueName: \"kubernetes.io/projected/2920b6e9-9296-4249-a539-f84d65e0d79c-kube-api-access-9fx8f\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7jz7\" (UID: \"2920b6e9-9296-4249-a539-f84d65e0d79c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.092325 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-mountpoint-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.093187 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/d144e669-4571-4f1e-91f4-8584b50743ec-encryption-config\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.093984 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6022f6c-fa48-40b0-b2c2-e74b56071b38-trusted-ca\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.094334 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrnps\" (UniqueName: \"kubernetes.io/projected/27e38351-809f-4f9e-9c07-7930a5db7b0b-kube-api-access-hrnps\") pod \"service-ca-operator-777779d784-nglrn\" (UID: \"27e38351-809f-4f9e-9c07-7930a5db7b0b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.094373 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4431f8fc-1c3c-462c-ae64-b2ab77eb9d57-srv-cert\") pod \"catalog-operator-68c6474976-brjtv\" (UID: \"4431f8fc-1c3c-462c-ae64-b2ab77eb9d57\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.094577 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-etcd-serving-ca\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.095185 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l2chv\" (UniqueName: \"kubernetes.io/projected/84eba226-cf40-4011-a4a0-0cb9e774da5e-kube-api-access-l2chv\") pod \"downloads-7954f5f757-l4vrl\" (UID: \"84eba226-cf40-4011-a4a0-0cb9e774da5e\") " pod="openshift-console/downloads-7954f5f757-l4vrl" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.095217 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7flgk\" (UniqueName: \"kubernetes.io/projected/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-kube-api-access-7flgk\") pod \"collect-profiles-29399760-rkv44\" (UID: \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.095243 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6302d88-9f2e-49a0-b1af-8d14585b6e2a-config\") pod \"console-operator-58897d9998-5bktz\" (UID: \"d6302d88-9f2e-49a0-b1af-8d14585b6e2a\") " pod="openshift-console-operator/console-operator-58897d9998-5bktz" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.095261 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7acbb8d1-6df2-4f3c-825a-d4f8104caceb-certs\") pod \"machine-config-server-65dmj\" (UID: \"7acbb8d1-6df2-4f3c-825a-d4f8104caceb\") " pod="openshift-machine-config-operator/machine-config-server-65dmj" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.095282 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-audit-policies\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.095898 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-registry-tls\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.096122 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.096211 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6022f6c-fa48-40b0-b2c2-e74b56071b38-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.096278 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bc1d3def-6313-4ed4-a518-341e82651b23-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c9vv5\" (UID: \"bc1d3def-6313-4ed4-a518-341e82651b23\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.097381 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-etcd-serving-ca\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.098865 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6302d88-9f2e-49a0-b1af-8d14585b6e2a-config\") pod \"console-operator-58897d9998-5bktz\" (UID: \"d6302d88-9f2e-49a0-b1af-8d14585b6e2a\") " pod="openshift-console-operator/console-operator-58897d9998-5bktz" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.099350 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6022f6c-fa48-40b0-b2c2-e74b56071b38-registry-certificates\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.099524 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-image-import-ca\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.099604 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/28bc15a8-f8ed-4595-8a4f-e0d9e895c085-images\") pod \"machine-api-operator-5694c8668f-kw8wv\" (UID: \"28bc15a8-f8ed-4595-8a4f-e0d9e895c085\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.099649 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f8vgz\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-kube-api-access-f8vgz\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.099677 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7acbb8d1-6df2-4f3c-825a-d4f8104caceb-node-bootstrap-token\") pod \"machine-config-server-65dmj\" (UID: \"7acbb8d1-6df2-4f3c-825a-d4f8104caceb\") " pod="openshift-machine-config-operator/machine-config-server-65dmj" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.100313 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b654e8f6-b229-4515-92f7-68367ffa48a2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fvtkh\" (UID: \"b654e8f6-b229-4515-92f7-68367ffa48a2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.100454 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rldpp\" (UniqueName: \"kubernetes.io/projected/44732887-85ec-4418-a663-c3a5504e926f-kube-api-access-rldpp\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.100593 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqcrf\" (UniqueName: \"kubernetes.io/projected/7acbb8d1-6df2-4f3c-825a-d4f8104caceb-kube-api-access-kqcrf\") pod \"machine-config-server-65dmj\" (UID: \"7acbb8d1-6df2-4f3c-825a-d4f8104caceb\") " 
pod="openshift-machine-config-operator/machine-config-server-65dmj" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.100714 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b85d7650-00f5-41a0-b862-b884dd7190cc-service-ca-bundle\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.101268 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d144e669-4571-4f1e-91f4-8584b50743ec-serving-cert\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.101943 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/28bc15a8-f8ed-4595-8a4f-e0d9e895c085-images\") pod \"machine-api-operator-5694c8668f-kw8wv\" (UID: \"28bc15a8-f8ed-4595-8a4f-e0d9e895c085\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.102810 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/28bc15a8-f8ed-4595-8a4f-e0d9e895c085-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kw8wv\" (UID: \"28bc15a8-f8ed-4595-8a4f-e0d9e895c085\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.102859 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d144e669-4571-4f1e-91f4-8584b50743ec-image-import-ca\") pod \"apiserver-76f77b778f-xhvvt\" (UID: 
\"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.102996 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6022f6c-fa48-40b0-b2c2-e74b56071b38-registry-certificates\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.103674 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mh45w\" (UID: \"e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.103757 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f9ee2c17-9432-45c1-ad58-0c09f7d93ad2-signing-key\") pod \"service-ca-9c57cc56f-6sjlm\" (UID: \"f9ee2c17-9432-45c1-ad58-0c09f7d93ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.103793 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jvq2\" (UniqueName: \"kubernetes.io/projected/bc1d3def-6313-4ed4-a518-341e82651b23-kube-api-access-6jvq2\") pod \"olm-operator-6b444d44fb-c9vv5\" (UID: \"bc1d3def-6313-4ed4-a518-341e82651b23\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.103958 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.104160 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mh45w\" (UID: \"e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.104206 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43f01bb4-4b85-4160-b8a9-8735ae78908d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l9l6\" (UID: \"43f01bb4-4b85-4160-b8a9-8735ae78908d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.104243 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985-config\") pod \"kube-controller-manager-operator-78b949d7b-mh45w\" (UID: \"e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.104982 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985-config\") pod \"kube-controller-manager-operator-78b949d7b-mh45w\" (UID: \"e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.105038 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e38351-809f-4f9e-9c07-7930a5db7b0b-serving-cert\") pod \"service-ca-operator-777779d784-nglrn\" (UID: \"27e38351-809f-4f9e-9c07-7930a5db7b0b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.105208 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.105307 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8f49176-755f-460b-857a-e82ee9abd6d7-config-volume\") pod \"dns-default-4xdgm\" (UID: \"c8f49176-755f-460b-857a-e82ee9abd6d7\") " pod="openshift-dns/dns-default-4xdgm" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.105682 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vpzt\" (UniqueName: \"kubernetes.io/projected/d144e669-4571-4f1e-91f4-8584b50743ec-kube-api-access-8vpzt\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.105734 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjn2v\" (UniqueName: 
\"kubernetes.io/projected/28bc15a8-f8ed-4595-8a4f-e0d9e895c085-kube-api-access-mjn2v\") pod \"machine-api-operator-5694c8668f-kw8wv\" (UID: \"28bc15a8-f8ed-4595-8a4f-e0d9e895c085\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.105772 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.105890 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d408e31-d2da-4e32-b951-1900830ae33e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h4f7j\" (UID: \"4d408e31-d2da-4e32-b951-1900830ae33e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h4f7j" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.105976 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.106008 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d144e669-4571-4f1e-91f4-8584b50743ec-etcd-client\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" 
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.107429 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2920b6e9-9296-4249-a539-f84d65e0d79c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7jz7\" (UID: \"2920b6e9-9296-4249-a539-f84d65e0d79c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.108672 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/28bc15a8-f8ed-4595-8a4f-e0d9e895c085-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kw8wv\" (UID: \"28bc15a8-f8ed-4595-8a4f-e0d9e895c085\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.109976 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.110872 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b654e8f6-b229-4515-92f7-68367ffa48a2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fvtkh\" (UID: \"b654e8f6-b229-4515-92f7-68367ffa48a2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.112399 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6022f6c-fa48-40b0-b2c2-e74b56071b38-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.117715 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43f01bb4-4b85-4160-b8a9-8735ae78908d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l9l6\" (UID: \"43f01bb4-4b85-4160-b8a9-8735ae78908d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.121511 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w6lt\" (UniqueName: \"kubernetes.io/projected/d6302d88-9f2e-49a0-b1af-8d14585b6e2a-kube-api-access-4w6lt\") pod \"console-operator-58897d9998-5bktz\" (UID: \"d6302d88-9f2e-49a0-b1af-8d14585b6e2a\") " pod="openshift-console-operator/console-operator-58897d9998-5bktz" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.132946 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/d144e669-4571-4f1e-91f4-8584b50743ec-etcd-client\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.154904 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x64bf\" (UniqueName: \"kubernetes.io/projected/b654e8f6-b229-4515-92f7-68367ffa48a2-kube-api-access-x64bf\") pod \"cluster-samples-operator-665b6dd947-fvtkh\" (UID: \"b654e8f6-b229-4515-92f7-68367ffa48a2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.169926 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-bound-sa-token\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.176018 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jl279"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.178381 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fx8f\" (UniqueName: \"kubernetes.io/projected/2920b6e9-9296-4249-a539-f84d65e0d79c-kube-api-access-9fx8f\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7jz7\" (UID: \"2920b6e9-9296-4249-a539-f84d65e0d79c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.194837 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74"] Nov 24 12:01:34 crc kubenswrapper[4930]: 
I1124 12:01:34.197579 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2chv\" (UniqueName: \"kubernetes.io/projected/84eba226-cf40-4011-a4a0-0cb9e774da5e-kube-api-access-l2chv\") pod \"downloads-7954f5f757-l4vrl\" (UID: \"84eba226-cf40-4011-a4a0-0cb9e774da5e\") " pod="openshift-console/downloads-7954f5f757-l4vrl"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.206592 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.206806 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b85d7650-00f5-41a0-b862-b884dd7190cc-metrics-certs\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.206830 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b85d7650-00f5-41a0-b862-b884dd7190cc-default-certificate\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.206851 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3-apiservice-cert\") pod \"packageserver-d55dfcdfc-q5hrn\" (UID: \"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.206871 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lvqv\" (UniqueName: \"kubernetes.io/projected/b85d7650-00f5-41a0-b862-b884dd7190cc-kube-api-access-6lvqv\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.206894 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/977b7ce9-3cab-4d86-b297-e062e48195b5-cert\") pod \"ingress-canary-bkp4v\" (UID: \"977b7ce9-3cab-4d86-b297-e062e48195b5\") " pod="openshift-ingress-canary/ingress-canary-bkp4v"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.206913 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bc1d3def-6313-4ed4-a518-341e82651b23-srv-cert\") pod \"olm-operator-6b444d44fb-c9vv5\" (UID: \"bc1d3def-6313-4ed4-a518-341e82651b23\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.206937 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zj2z\" (UniqueName: \"kubernetes.io/projected/fabadb7c-e637-4769-b633-ea2b745bb9e4-kube-api-access-4zj2z\") pod \"machine-config-controller-84d6567774-pt8p8\" (UID: \"fabadb7c-e637-4769-b633-ea2b745bb9e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.206961 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.206978 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3-tmpfs\") pod \"packageserver-d55dfcdfc-q5hrn\" (UID: \"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.206997 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207012 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f9ee2c17-9432-45c1-ad58-0c09f7d93ad2-signing-cabundle\") pod \"service-ca-9c57cc56f-6sjlm\" (UID: \"f9ee2c17-9432-45c1-ad58-0c09f7d93ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207027 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-socket-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207042 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-secret-volume\") pod \"collect-profiles-29399760-rkv44\" (UID: \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207068 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207082 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-config-volume\") pod \"collect-profiles-29399760-rkv44\" (UID: \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207097 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-csi-data-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207113 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fabadb7c-e637-4769-b633-ea2b745bb9e4-proxy-tls\") pod \"machine-config-controller-84d6567774-pt8p8\" (UID: \"fabadb7c-e637-4769-b633-ea2b745bb9e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207130 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207145 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdt8n\" (UniqueName: \"kubernetes.io/projected/4431f8fc-1c3c-462c-ae64-b2ab77eb9d57-kube-api-access-wdt8n\") pod \"catalog-operator-68c6474976-brjtv\" (UID: \"4431f8fc-1c3c-462c-ae64-b2ab77eb9d57\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207169 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b85d7650-00f5-41a0-b862-b884dd7190cc-stats-auth\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207185 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-registration-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207201 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxbqz\" (UniqueName: \"kubernetes.io/projected/4d408e31-d2da-4e32-b951-1900830ae33e-kube-api-access-lxbqz\") pod \"multus-admission-controller-857f4d67dd-h4f7j\" (UID: \"4d408e31-d2da-4e32-b951-1900830ae33e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h4f7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207217 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207233 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb2ng\" (UniqueName: \"kubernetes.io/projected/f956bae9-4db9-4698-bb42-5b6c872d8b35-kube-api-access-qb2ng\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207252 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fabadb7c-e637-4769-b633-ea2b745bb9e4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pt8p8\" (UID: \"fabadb7c-e637-4769-b633-ea2b745bb9e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207267 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm7ss\" (UniqueName: \"kubernetes.io/projected/f9ee2c17-9432-45c1-ad58-0c09f7d93ad2-kube-api-access-nm7ss\") pod \"service-ca-9c57cc56f-6sjlm\" (UID: \"f9ee2c17-9432-45c1-ad58-0c09f7d93ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207285 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-plugins-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207301 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-mountpoint-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207316 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrnps\" (UniqueName: \"kubernetes.io/projected/27e38351-809f-4f9e-9c07-7930a5db7b0b-kube-api-access-hrnps\") pod \"service-ca-operator-777779d784-nglrn\" (UID: \"27e38351-809f-4f9e-9c07-7930a5db7b0b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207330 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4431f8fc-1c3c-462c-ae64-b2ab77eb9d57-srv-cert\") pod \"catalog-operator-68c6474976-brjtv\" (UID: \"4431f8fc-1c3c-462c-ae64-b2ab77eb9d57\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207348 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7flgk\" (UniqueName: \"kubernetes.io/projected/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-kube-api-access-7flgk\") pod \"collect-profiles-29399760-rkv44\" (UID: \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207364 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7acbb8d1-6df2-4f3c-825a-d4f8104caceb-certs\") pod \"machine-config-server-65dmj\" (UID: \"7acbb8d1-6df2-4f3c-825a-d4f8104caceb\") " pod="openshift-machine-config-operator/machine-config-server-65dmj"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207378 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-audit-policies\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207394 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207409 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bc1d3def-6313-4ed4-a518-341e82651b23-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c9vv5\" (UID: \"bc1d3def-6313-4ed4-a518-341e82651b23\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207430 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7acbb8d1-6df2-4f3c-825a-d4f8104caceb-node-bootstrap-token\") pod \"machine-config-server-65dmj\" (UID: \"7acbb8d1-6df2-4f3c-825a-d4f8104caceb\") " pod="openshift-machine-config-operator/machine-config-server-65dmj"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207445 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rldpp\" (UniqueName: \"kubernetes.io/projected/44732887-85ec-4418-a663-c3a5504e926f-kube-api-access-rldpp\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207463 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqcrf\" (UniqueName: \"kubernetes.io/projected/7acbb8d1-6df2-4f3c-825a-d4f8104caceb-kube-api-access-kqcrf\") pod \"machine-config-server-65dmj\" (UID: \"7acbb8d1-6df2-4f3c-825a-d4f8104caceb\") " pod="openshift-machine-config-operator/machine-config-server-65dmj"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207477 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b85d7650-00f5-41a0-b862-b884dd7190cc-service-ca-bundle\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207497 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f9ee2c17-9432-45c1-ad58-0c09f7d93ad2-signing-key\") pod \"service-ca-9c57cc56f-6sjlm\" (UID: \"f9ee2c17-9432-45c1-ad58-0c09f7d93ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207511 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jvq2\" (UniqueName: \"kubernetes.io/projected/bc1d3def-6313-4ed4-a518-341e82651b23-kube-api-access-6jvq2\") pod \"olm-operator-6b444d44fb-c9vv5\" (UID: \"bc1d3def-6313-4ed4-a518-341e82651b23\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207527 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207565 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e38351-809f-4f9e-9c07-7930a5db7b0b-serving-cert\") pod \"service-ca-operator-777779d784-nglrn\" (UID: \"27e38351-809f-4f9e-9c07-7930a5db7b0b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207582 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8f49176-755f-460b-857a-e82ee9abd6d7-config-volume\") pod \"dns-default-4xdgm\" (UID: \"c8f49176-755f-460b-857a-e82ee9abd6d7\") " pod="openshift-dns/dns-default-4xdgm"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207611 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207637 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207653 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d408e31-d2da-4e32-b951-1900830ae33e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h4f7j\" (UID: \"4d408e31-d2da-4e32-b951-1900830ae33e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h4f7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207668 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207692 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f956bae9-4db9-4698-bb42-5b6c872d8b35-audit-dir\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207707 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggvcd\" (UniqueName: \"kubernetes.io/projected/c8f49176-755f-460b-857a-e82ee9abd6d7-kube-api-access-ggvcd\") pod \"dns-default-4xdgm\" (UID: \"c8f49176-755f-460b-857a-e82ee9abd6d7\") " pod="openshift-dns/dns-default-4xdgm"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207723 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e38351-809f-4f9e-9c07-7930a5db7b0b-config\") pod \"service-ca-operator-777779d784-nglrn\" (UID: \"27e38351-809f-4f9e-9c07-7930a5db7b0b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207739 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnwtm\" (UniqueName: \"kubernetes.io/projected/977b7ce9-3cab-4d86-b297-e062e48195b5-kube-api-access-hnwtm\") pod \"ingress-canary-bkp4v\" (UID: \"977b7ce9-3cab-4d86-b297-e062e48195b5\") " pod="openshift-ingress-canary/ingress-canary-bkp4v"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207752 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3-webhook-cert\") pod \"packageserver-d55dfcdfc-q5hrn\" (UID: \"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207774 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfcpx\" (UniqueName: \"kubernetes.io/projected/53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3-kube-api-access-zfcpx\") pod \"packageserver-d55dfcdfc-q5hrn\" (UID: \"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207794 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207810 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4431f8fc-1c3c-462c-ae64-b2ab77eb9d57-profile-collector-cert\") pod \"catalog-operator-68c6474976-brjtv\" (UID: \"4431f8fc-1c3c-462c-ae64-b2ab77eb9d57\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.207824 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8f49176-755f-460b-857a-e82ee9abd6d7-metrics-tls\") pod \"dns-default-4xdgm\" (UID: \"c8f49176-755f-460b-857a-e82ee9abd6d7\") " pod="openshift-dns/dns-default-4xdgm"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.208334 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.208749 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3-tmpfs\") pod \"packageserver-d55dfcdfc-q5hrn\" (UID: \"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.214241 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.214323 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8f49176-755f-460b-857a-e82ee9abd6d7-metrics-tls\") pod \"dns-default-4xdgm\" (UID: \"c8f49176-755f-460b-857a-e82ee9abd6d7\") " pod="openshift-dns/dns-default-4xdgm"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.214937 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.214988 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f956bae9-4db9-4698-bb42-5b6c872d8b35-audit-dir\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.215039 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-config-volume\") pod \"collect-profiles-29399760-rkv44\" (UID: \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.215633 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e38351-809f-4f9e-9c07-7930a5db7b0b-config\") pod \"service-ca-operator-777779d784-nglrn\" (UID: \"27e38351-809f-4f9e-9c07-7930a5db7b0b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.215719 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-csi-data-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.216157 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-registration-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.216441 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b85d7650-00f5-41a0-b862-b884dd7190cc-service-ca-bundle\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.216892 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-socket-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.218503 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fabadb7c-e637-4769-b633-ea2b745bb9e4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pt8p8\" (UID: \"fabadb7c-e637-4769-b633-ea2b745bb9e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8"
Nov 24 12:01:34 crc kubenswrapper[4930]: E1124 12:01:34.218817 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:34.71877188 +0000 UTC m=+141.333099830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.219477 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.220421 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-audit-policies\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.220899 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-plugins-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.222178 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/44732887-85ec-4418-a663-c3a5504e926f-mountpoint-dir\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.222199 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8f49176-755f-460b-857a-e82ee9abd6d7-config-volume\") pod \"dns-default-4xdgm\" (UID: \"c8f49176-755f-460b-857a-e82ee9abd6d7\") " pod="openshift-dns/dns-default-4xdgm"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.223512 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f9ee2c17-9432-45c1-ad58-0c09f7d93ad2-signing-cabundle\") pod \"service-ca-9c57cc56f-6sjlm\" (UID: \"f9ee2c17-9432-45c1-ad58-0c09f7d93ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.224236 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.224495 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e38351-809f-4f9e-9c07-7930a5db7b0b-serving-cert\") pod \"service-ca-operator-777779d784-nglrn\" (UID: \"27e38351-809f-4f9e-9c07-7930a5db7b0b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.224709 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3-apiservice-cert\") pod \"packageserver-d55dfcdfc-q5hrn\" (UID: \"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.224741 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.224868 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.225170 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b85d7650-00f5-41a0-b862-b884dd7190cc-metrics-certs\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.229168 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b85d7650-00f5-41a0-b862-b884dd7190cc-default-certificate\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.232386 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d408e31-d2da-4e32-b951-1900830ae33e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h4f7j\" (UID: \"4d408e31-d2da-4e32-b951-1900830ae33e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h4f7j"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.232467 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.232588 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f9ee2c17-9432-45c1-ad58-0c09f7d93ad2-signing-key\") pod \"service-ca-9c57cc56f-6sjlm\" (UID: \"f9ee2c17-9432-45c1-ad58-0c09f7d93ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm"
Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.232792 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 
24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.232859 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-secret-volume\") pod \"collect-profiles-29399760-rkv44\" (UID: \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.233092 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fabadb7c-e637-4769-b633-ea2b745bb9e4-proxy-tls\") pod \"machine-config-controller-84d6567774-pt8p8\" (UID: \"fabadb7c-e637-4769-b633-ea2b745bb9e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.233334 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/977b7ce9-3cab-4d86-b297-e062e48195b5-cert\") pod \"ingress-canary-bkp4v\" (UID: \"977b7ce9-3cab-4d86-b297-e062e48195b5\") " pod="openshift-ingress-canary/ingress-canary-bkp4v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.238962 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7acbb8d1-6df2-4f3c-825a-d4f8104caceb-node-bootstrap-token\") pod \"machine-config-server-65dmj\" (UID: \"7acbb8d1-6df2-4f3c-825a-d4f8104caceb\") " pod="openshift-machine-config-operator/machine-config-server-65dmj" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.240626 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.241807 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4431f8fc-1c3c-462c-ae64-b2ab77eb9d57-srv-cert\") pod \"catalog-operator-68c6474976-brjtv\" (UID: \"4431f8fc-1c3c-462c-ae64-b2ab77eb9d57\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.244903 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bc1d3def-6313-4ed4-a518-341e82651b23-srv-cert\") pod \"olm-operator-6b444d44fb-c9vv5\" (UID: \"bc1d3def-6313-4ed4-a518-341e82651b23\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.244900 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3-webhook-cert\") pod \"packageserver-d55dfcdfc-q5hrn\" (UID: \"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.244904 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bc1d3def-6313-4ed4-a518-341e82651b23-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c9vv5\" (UID: \"bc1d3def-6313-4ed4-a518-341e82651b23\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.245640 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.247235 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8vgz\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-kube-api-access-f8vgz\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.247807 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4431f8fc-1c3c-462c-ae64-b2ab77eb9d57-profile-collector-cert\") pod \"catalog-operator-68c6474976-brjtv\" (UID: \"4431f8fc-1c3c-462c-ae64-b2ab77eb9d57\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.248355 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7acbb8d1-6df2-4f3c-825a-d4f8104caceb-certs\") pod \"machine-config-server-65dmj\" (UID: \"7acbb8d1-6df2-4f3c-825a-d4f8104caceb\") " pod="openshift-machine-config-operator/machine-config-server-65dmj" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.250056 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b85d7650-00f5-41a0-b862-b884dd7190cc-stats-auth\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.259285 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mh45w\" (UID: \"e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.282058 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43f01bb4-4b85-4160-b8a9-8735ae78908d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2l9l6\" (UID: \"43f01bb4-4b85-4160-b8a9-8735ae78908d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.288437 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5bktz" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.303943 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjn2v\" (UniqueName: \"kubernetes.io/projected/28bc15a8-f8ed-4595-8a4f-e0d9e895c085-kube-api-access-mjn2v\") pod \"machine-api-operator-5694c8668f-kw8wv\" (UID: \"28bc15a8-f8ed-4595-8a4f-e0d9e895c085\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.309628 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: E1124 12:01:34.310059 4930 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:34.810045057 +0000 UTC m=+141.424373007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.313149 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l4vrl" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.349472 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.352447 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vpzt\" (UniqueName: \"kubernetes.io/projected/d144e669-4571-4f1e-91f4-8584b50743ec-kube-api-access-8vpzt\") pod \"apiserver-76f77b778f-xhvvt\" (UID: \"d144e669-4571-4f1e-91f4-8584b50743ec\") " pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.365806 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rldpp\" (UniqueName: \"kubernetes.io/projected/44732887-85ec-4418-a663-c3a5504e926f-kube-api-access-rldpp\") pod \"csi-hostpathplugin-b6v7j\" (UID: \"44732887-85ec-4418-a663-c3a5504e926f\") " pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.378922 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z7lsz"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.384962 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdt8n\" (UniqueName: \"kubernetes.io/projected/4431f8fc-1c3c-462c-ae64-b2ab77eb9d57-kube-api-access-wdt8n\") pod \"catalog-operator-68c6474976-brjtv\" (UID: \"4431f8fc-1c3c-462c-ae64-b2ab77eb9d57\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.393889 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qfqmk"] Nov 24 12:01:34 crc kubenswrapper[4930]: W1124 12:01:34.409939 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod507084c7_1280_4943_bff6_497f1dc21c0a.slice/crio-e55dba9c5bcc5f7cc9f36396f056ff999bf57c283ab1d260840fdefc0a610c4e 
WatchSource:0}: Error finding container e55dba9c5bcc5f7cc9f36396f056ff999bf57c283ab1d260840fdefc0a610c4e: Status 404 returned error can't find the container with id e55dba9c5bcc5f7cc9f36396f056ff999bf57c283ab1d260840fdefc0a610c4e Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.410744 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:34 crc kubenswrapper[4930]: E1124 12:01:34.411415 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:34.911387988 +0000 UTC m=+141.525715938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.412837 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lvqv\" (UniqueName: \"kubernetes.io/projected/b85d7650-00f5-41a0-b862-b884dd7190cc-kube-api-access-6lvqv\") pod \"router-default-5444994796-jkm8r\" (UID: \"b85d7650-00f5-41a0-b862-b884dd7190cc\") " pod="openshift-ingress/router-default-5444994796-jkm8r" Nov 24 12:01:34 crc kubenswrapper[4930]: W1124 12:01:34.425789 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6f9bbba_9d3d_4aec_8d86_e98fde2606ca.slice/crio-e2bf686a65231b067a24d5ca2f38822288b369e5aa222bddc3c837f565c702ad WatchSource:0}: Error finding container e2bf686a65231b067a24d5ca2f38822288b369e5aa222bddc3c837f565c702ad: Status 404 returned error can't find the container with id e2bf686a65231b067a24d5ca2f38822288b369e5aa222bddc3c837f565c702ad Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.434108 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zj2z\" (UniqueName: \"kubernetes.io/projected/fabadb7c-e637-4769-b633-ea2b745bb9e4-kube-api-access-4zj2z\") pod \"machine-config-controller-84d6567774-pt8p8\" (UID: \"fabadb7c-e637-4769-b633-ea2b745bb9e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.440756 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.445422 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggvcd\" (UniqueName: \"kubernetes.io/projected/c8f49176-755f-460b-857a-e82ee9abd6d7-kube-api-access-ggvcd\") pod \"dns-default-4xdgm\" (UID: \"c8f49176-755f-460b-857a-e82ee9abd6d7\") " pod="openshift-dns/dns-default-4xdgm" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.451487 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.465841 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.466006 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.472211 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.472270 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnwtm\" (UniqueName: \"kubernetes.io/projected/977b7ce9-3cab-4d86-b297-e062e48195b5-kube-api-access-hnwtm\") pod \"ingress-canary-bkp4v\" (UID: \"977b7ce9-3cab-4d86-b297-e062e48195b5\") " pod="openshift-ingress-canary/ingress-canary-bkp4v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.491508 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqcrf\" (UniqueName: \"kubernetes.io/projected/7acbb8d1-6df2-4f3c-825a-d4f8104caceb-kube-api-access-kqcrf\") pod \"machine-config-server-65dmj\" (UID: \"7acbb8d1-6df2-4f3c-825a-d4f8104caceb\") " pod="openshift-machine-config-operator/machine-config-server-65dmj" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.492518 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.500271 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-m48vx"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.505847 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrnps\" (UniqueName: \"kubernetes.io/projected/27e38351-809f-4f9e-9c07-7930a5db7b0b-kube-api-access-hrnps\") pod \"service-ca-operator-777779d784-nglrn\" (UID: \"27e38351-809f-4f9e-9c07-7930a5db7b0b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.506611 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.514033 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.515613 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: E1124 12:01:34.516248 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.016220348 +0000 UTC m=+141.630548288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.517078 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.526447 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.530269 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxbqz\" (UniqueName: \"kubernetes.io/projected/4d408e31-d2da-4e32-b951-1900830ae33e-kube-api-access-lxbqz\") pod \"multus-admission-controller-857f4d67dd-h4f7j\" (UID: \"4d408e31-d2da-4e32-b951-1900830ae33e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h4f7j" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.547116 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm7ss\" (UniqueName: \"kubernetes.io/projected/f9ee2c17-9432-45c1-ad58-0c09f7d93ad2-kube-api-access-nm7ss\") pod \"service-ca-9c57cc56f-6sjlm\" (UID: \"f9ee2c17-9432-45c1-ad58-0c09f7d93ad2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.566020 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-jkm8r" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.569708 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb2ng\" (UniqueName: \"kubernetes.io/projected/f956bae9-4db9-4698-bb42-5b6c872d8b35-kube-api-access-qb2ng\") pod \"oauth-openshift-558db77b4-9d78l\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") " pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.576577 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7flgk\" (UniqueName: \"kubernetes.io/projected/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-kube-api-access-7flgk\") pod \"collect-profiles-29399760-rkv44\" (UID: \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.577961 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.590095 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-h4f7j" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.596291 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-4xdgm" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.598111 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jvq2\" (UniqueName: \"kubernetes.io/projected/bc1d3def-6313-4ed4-a518-341e82651b23-kube-api-access-6jvq2\") pod \"olm-operator-6b444d44fb-c9vv5\" (UID: \"bc1d3def-6313-4ed4-a518-341e82651b23\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.605280 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bkp4v" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.612391 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-65dmj" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.618164 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfcpx\" (UniqueName: \"kubernetes.io/projected/53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3-kube-api-access-zfcpx\") pod \"packageserver-d55dfcdfc-q5hrn\" (UID: \"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.621105 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:34 crc kubenswrapper[4930]: E1124 12:01:34.621387 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 12:01:35.121353599 +0000 UTC m=+141.735681549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.623413 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: E1124 12:01:34.623967 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.123947645 +0000 UTC m=+141.738275595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.645985 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.701016 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-226nn"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.707698 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdqq8"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.727303 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:34 crc kubenswrapper[4930]: E1124 12:01:34.727396 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.227371145 +0000 UTC m=+141.841699095 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.743802 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: E1124 12:01:34.744317 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.244303058 +0000 UTC m=+141.858631008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.747117 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qh578"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.817012 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qfqmk" event={"ID":"507084c7-1280-4943-bff6-497f1dc21c0a","Type":"ContainerStarted","Data":"e55dba9c5bcc5f7cc9f36396f056ff999bf57c283ab1d260840fdefc0a610c4e"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.818478 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.823140 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74" event={"ID":"1764ecb0-77fe-4ff4-9106-6860622b2491","Type":"ContainerStarted","Data":"751c9fabcb0fa971fd4a6885ae7c2854bcc10db42e425e735788d254de341168"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.823180 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74" event={"ID":"1764ecb0-77fe-4ff4-9106-6860622b2491","Type":"ContainerStarted","Data":"2c1c7165ae06ead811aae9cd199117f2b7fd7260b3155460ddee4b3a0179fc7f"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.823348 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dkr44"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.825663 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.832403 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l4vrl"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.835243 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" Nov 24 12:01:34 crc kubenswrapper[4930]: W1124 12:01:34.835365 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b68a023_c6c2_458f_a714_a084b12a83cc.slice/crio-0323ebc1a6d43512f7d9952c78cd75ed65e4c9b0e0408db7e1c459eb62357257 WatchSource:0}: Error finding container 0323ebc1a6d43512f7d9952c78cd75ed65e4c9b0e0408db7e1c459eb62357257: Status 404 returned error can't find the container with id 0323ebc1a6d43512f7d9952c78cd75ed65e4c9b0e0408db7e1c459eb62357257 Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.837686 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl" event={"ID":"084904b5-0321-4d4f-b26a-48c5950a5d98","Type":"ContainerStarted","Data":"3a24e20530df0d7fd1f1c632725fbdfba33882083ea15cf4eb839d7ea8c3ccfb"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.837733 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl" event={"ID":"084904b5-0321-4d4f-b26a-48c5950a5d98","Type":"ContainerStarted","Data":"aeeba438c9fd161ad7dbf59ff4e46a93291a1abbb99c7963d1c94010c09d7f32"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.837744 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl" event={"ID":"084904b5-0321-4d4f-b26a-48c5950a5d98","Type":"ContainerStarted","Data":"3c791a2ee970ca12dfabb90c47fc14a9fc83511215fb3781c1b7a6be37d0aaad"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.837903 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.840298 4930 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.843510 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" event={"ID":"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca","Type":"ContainerStarted","Data":"e2bf686a65231b067a24d5ca2f38822288b369e5aa222bddc3c837f565c702ad"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.844831 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:34 crc kubenswrapper[4930]: E1124 12:01:34.849643 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.349620274 +0000 UTC m=+141.963948224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.860135 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.860422 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.870084 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" event={"ID":"3052750d-7ce6-4fee-8b97-f18ea3be457d","Type":"ContainerStarted","Data":"210e26f4d787e2c0af9518f12a5a04d95ba0b6d142bb5343b94650e15db60804"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.870131 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" event={"ID":"3052750d-7ce6-4fee-8b97-f18ea3be457d","Type":"ContainerStarted","Data":"cba1fb8772bf4f021fdac9ebcb2ebc1182388806c685b109de6e1d238e466e28"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.872566 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5bktz"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.873606 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s42xf"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.874595 4930 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" event={"ID":"71dbe23d-480e-43aa-8106-b19ae5b98734","Type":"ContainerStarted","Data":"576f5007f19efd5948557c3fd75adb95351cca8c9e0f3ff17ae3cf77e12afad4"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.884919 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j"] Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.890821 4930 generic.go:334] "Generic (PLEG): container finished" podID="f9dd2e0b-db34-4962-a370-03deea21911a" containerID="74b98633a69e9d9e88a28739706be6de4fa32a43f0b77f4f8a11937fcbd988c0" exitCode=0 Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.891192 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" event={"ID":"f9dd2e0b-db34-4962-a370-03deea21911a","Type":"ContainerDied","Data":"74b98633a69e9d9e88a28739706be6de4fa32a43f0b77f4f8a11937fcbd988c0"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.891257 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" event={"ID":"f9dd2e0b-db34-4962-a370-03deea21911a","Type":"ContainerStarted","Data":"fa7ff2308201df82620d0043a81f2f6f3e44556de556240afafa231524505606"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.894466 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn" event={"ID":"4e779576-9402-40c2-bdf6-a62360dc60b3","Type":"ContainerStarted","Data":"9992e5fd08268ecdce34354860583fb175acd55c49456277b90489fe18986877"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.898520 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" 
event={"ID":"3532e932-4436-4950-8f1d-b622a393356e","Type":"ContainerStarted","Data":"15a8f8bd54857410f314bef8de588ad54fad6c5c10dbf2f5f52dfa2833a37973"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.949908 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:34 crc kubenswrapper[4930]: E1124 12:01:34.951936 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.451917192 +0000 UTC m=+142.066245232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.967219 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" event={"ID":"cc9af663-c7f1-485e-a7fc-709da901e9e1","Type":"ContainerStarted","Data":"b72754d3f35f74e4d566174b9c2f4ae4dc57045f1ad277761d0426e5bd1d06cd"} Nov 24 12:01:34 crc kubenswrapper[4930]: I1124 12:01:34.967285 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" 
event={"ID":"cc9af663-c7f1-485e-a7fc-709da901e9e1","Type":"ContainerStarted","Data":"d2d46b5e38f14b2413346b3ae0b83e6aaaceabf0a354a001f9039f6404cb3c98"} Nov 24 12:01:35 crc kubenswrapper[4930]: W1124 12:01:35.013606 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb85d7650_00f5_41a0_b862_b884dd7190cc.slice/crio-8031d6a8f4d056311690f9676c1b66bd65ca15e2b8f9d71442e39ca5693e9635 WatchSource:0}: Error finding container 8031d6a8f4d056311690f9676c1b66bd65ca15e2b8f9d71442e39ca5693e9635: Status 404 returned error can't find the container with id 8031d6a8f4d056311690f9676c1b66bd65ca15e2b8f9d71442e39ca5693e9635 Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.044811 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh"] Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.051107 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.052606 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.552582543 +0000 UTC m=+142.166910493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.055630 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xhvvt"] Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.153106 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.153122 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.653104269 +0000 UTC m=+142.267432219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.255241 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.257093 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.757053745 +0000 UTC m=+142.371381695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.270059 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.270843 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.770810206 +0000 UTC m=+142.385138156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.284928 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4"] Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.287231 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kw8wv"] Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.372767 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.872739433 +0000 UTC m=+142.487067383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.371308 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.373157 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.375105 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.875086262 +0000 UTC m=+142.489414212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.464695 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w"] Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.467735 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7"] Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.474201 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv"] Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.477378 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.477679 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.977608636 +0000 UTC m=+142.591936576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.478318 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.478877 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:35.978853422 +0000 UTC m=+142.593181362 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.515139 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6"] Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.545861 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtd74" podStartSLOduration=121.545839792 podStartE2EDuration="2m1.545839792s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:35.529490867 +0000 UTC m=+142.143818817" watchObservedRunningTime="2025-11-24 12:01:35.545839792 +0000 UTC m=+142.160167742" Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.589336 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.589861 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 12:01:36.089801682 +0000 UTC m=+142.704129632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.693656 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.694100 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:36.194081948 +0000 UTC m=+142.808409898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.795022 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.795245 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:36.295207432 +0000 UTC m=+142.909535392 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.799856 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.800275 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:36.300255999 +0000 UTC m=+142.914583949 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:35 crc kubenswrapper[4930]: W1124 12:01:35.863787 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4431f8fc_1c3c_462c_ae64_b2ab77eb9d57.slice/crio-8828e0e90d0938a0dcaee6978e16d03f96cde51155cdd420cd5c25112156fa35 WatchSource:0}: Error finding container 8828e0e90d0938a0dcaee6978e16d03f96cde51155cdd420cd5c25112156fa35: Status 404 returned error can't find the container with id 8828e0e90d0938a0dcaee6978e16d03f96cde51155cdd420cd5c25112156fa35 Nov 24 12:01:35 crc kubenswrapper[4930]: W1124 12:01:35.868738 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2920b6e9_9296_4249_a539_f84d65e0d79c.slice/crio-161a4459690ecd148b56d0da2269b032d2f517add4fc2c6fe115f70755f0edc0 WatchSource:0}: Error finding container 161a4459690ecd148b56d0da2269b032d2f517add4fc2c6fe115f70755f0edc0: Status 404 returned error can't find the container with id 161a4459690ecd148b56d0da2269b032d2f517add4fc2c6fe115f70755f0edc0 Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.901812 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.902146 4930 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:36.402122414 +0000 UTC m=+143.016450364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:35 crc kubenswrapper[4930]: I1124 12:01:35.902630 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:35 crc kubenswrapper[4930]: E1124 12:01:35.903093 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:36.403078822 +0000 UTC m=+143.017406772 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.004995 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.005466 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:36.505445932 +0000 UTC m=+143.119773882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.020898 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdqq8" event={"ID":"9b68a023-c6c2-458f-a714-a084b12a83cc","Type":"ContainerStarted","Data":"0323ebc1a6d43512f7d9952c78cd75ed65e4c9b0e0408db7e1c459eb62357257"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.045589 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" event={"ID":"fed4ab08-54d0-4526-bd9a-3d1e660fc31a","Type":"ContainerStarted","Data":"7a05b8d720027524934d482317b70875988c970ba90db0724f824f127903fc0d"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.071263 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5bktz" event={"ID":"d6302d88-9f2e-49a0-b1af-8d14585b6e2a","Type":"ContainerStarted","Data":"0fa10cfe162d5a1839e7f5b9e7eba16ca793abd66374c7d1b8798720f21aba19"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.107113 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.107500 
4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:36.607488033 +0000 UTC m=+143.221815983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.112044 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" event={"ID":"6639778e-480a-4822-90cb-48d2e976d509","Type":"ContainerStarted","Data":"0110705196cc64dc83207fefe9314eb7e7c84b6c1506948a521e1539528450c5"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.113439 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" event={"ID":"d144e669-4571-4f1e-91f4-8584b50743ec","Type":"ContainerStarted","Data":"83288719cf347bc6f1bdbadde68b068e2e6e0ee8ba8ba2299667635e0ffad20b"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.123320 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" event={"ID":"3532e932-4436-4950-8f1d-b622a393356e","Type":"ContainerStarted","Data":"cdc46599f272e8a0452eecd0d43dc07e2f39d80275d648de877c89e5aeb0c53a"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.129254 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qfqmk" 
event={"ID":"507084c7-1280-4943-bff6-497f1dc21c0a","Type":"ContainerStarted","Data":"b78814f87cbb6135ad47e81884f102bd5fceaaf5e198f10bd296d6ac326de644"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.147014 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" event={"ID":"28bc15a8-f8ed-4595-8a4f-e0d9e895c085","Type":"ContainerStarted","Data":"eb70ed5a53355cce79f0bad93b162a5d431bec42c19613face4d1c437c6b7616"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.149365 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl" podStartSLOduration=121.149323111 podStartE2EDuration="2m1.149323111s" podCreationTimestamp="2025-11-24 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:36.146264972 +0000 UTC m=+142.760592922" watchObservedRunningTime="2025-11-24 12:01:36.149323111 +0000 UTC m=+142.763651061" Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.161557 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" event={"ID":"8eada43a-ea1e-4565-a042-716f030ba99d","Type":"ContainerStarted","Data":"2738cce9c251a8602b9d22c33a0d9707b77747b851118f319e5c9f008e0dc0a0"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.166646 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jkm8r" event={"ID":"b85d7650-00f5-41a0-b862-b884dd7190cc","Type":"ContainerStarted","Data":"8031d6a8f4d056311690f9676c1b66bd65ca15e2b8f9d71442e39ca5693e9635"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.170297 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" 
event={"ID":"71dbe23d-480e-43aa-8106-b19ae5b98734","Type":"ContainerStarted","Data":"828aa3d64b62214a4cd6865a655fe412f624e945aba043164570d2c21bf1fc20"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.176951 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qh578" event={"ID":"f8908bf3-e171-4859-80c7-baa64ca6e11c","Type":"ContainerStarted","Data":"de60c5a8f637803b90e4e8a93dc81997568f37fede47336773231935da2bde6b"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.181984 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" event={"ID":"e6f9bbba-9d3d-4aec-8d86-e98fde2606ca","Type":"ContainerStarted","Data":"ef8d63757cf322d3dedf11f40636580545a147d2c61d862c106b753522cdd065"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.187804 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" event={"ID":"3052750d-7ce6-4fee-8b97-f18ea3be457d","Type":"ContainerStarted","Data":"f006b36367dd35c4309d58ade9f4bc4f012dc2a829d25d60d8eaec4c0df549ef"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.190939 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s42xf" event={"ID":"b1ab09e9-91ba-481e-b364-12e2a90bed8e","Type":"ContainerStarted","Data":"6a8b55c18d0c55389e5d99206f66a18c585a3aa5f192d4a1551a657c64c454e3"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.213368 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7" event={"ID":"2920b6e9-9296-4249-a539-f84d65e0d79c","Type":"ContainerStarted","Data":"161a4459690ecd148b56d0da2269b032d2f517add4fc2c6fe115f70755f0edc0"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.213477 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.213722 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:36.713696245 +0000 UTC m=+143.328024195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.214211 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.215603 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:36.71558971 +0000 UTC m=+143.329917730 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.238717 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j" event={"ID":"7ef6223b-8ceb-4a44-b845-985899aff96b","Type":"ContainerStarted","Data":"ad3c9d79be330357f1300786bec8f4351a720c9d4c492da3510bbae97ab857de"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.251095 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w" event={"ID":"e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985","Type":"ContainerStarted","Data":"f71c563f393cee4f967bc507ed5f66d7db2b7072a8838bbb679c192ac941e7ba"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.273652 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-65dmj" event={"ID":"7acbb8d1-6df2-4f3c-825a-d4f8104caceb","Type":"ContainerStarted","Data":"cd2f8daf926b3011580d534466d86bafcaeebbd2f707e01b4783bdc95940927c"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.275696 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" event={"ID":"4431f8fc-1c3c-462c-ae64-b2ab77eb9d57","Type":"ContainerStarted","Data":"8828e0e90d0938a0dcaee6978e16d03f96cde51155cdd420cd5c25112156fa35"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.276595 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4" event={"ID":"080a5d44-2fa6-4e44-bd77-59047f85aea9","Type":"ContainerStarted","Data":"c969ed66a71f49e53fddf03549c9e6b16fe76326b8c9231bb3c95f94e46ad0dc"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.277411 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6" event={"ID":"43f01bb4-4b85-4160-b8a9-8735ae78908d","Type":"ContainerStarted","Data":"9a45cc18ff484611613dae8d372c0028cdeab980009005e8a6161c5b0e878d35"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.284673 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" event={"ID":"b654e8f6-b229-4515-92f7-68367ffa48a2","Type":"ContainerStarted","Data":"93cbf33eab6ad45536bb9202cc7539072e1ae52032d77bae8d42465192fdd99d"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.294440 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-qfqmk" podStartSLOduration=122.294414455 podStartE2EDuration="2m2.294414455s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:36.260208419 +0000 UTC m=+142.874536369" watchObservedRunningTime="2025-11-24 12:01:36.294414455 +0000 UTC m=+142.908742395" Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.315443 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.315738 4930 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:36.815710005 +0000 UTC m=+143.430037965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.315934 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.316760 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:36.816746895 +0000 UTC m=+143.431074845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.327651 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" event={"ID":"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb","Type":"ContainerStarted","Data":"f9ebd3d6dc146aebc9bc044c74f3a07d13989de80198a2375cae26935e215f82"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.330484 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bkp4v"] Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.375431 4930 generic.go:334] "Generic (PLEG): container finished" podID="cc9af663-c7f1-485e-a7fc-709da901e9e1" containerID="b72754d3f35f74e4d566174b9c2f4ae4dc57045f1ad277761d0426e5bd1d06cd" exitCode=0 Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.375521 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" event={"ID":"cc9af663-c7f1-485e-a7fc-709da901e9e1","Type":"ContainerDied","Data":"b72754d3f35f74e4d566174b9c2f4ae4dc57045f1ad277761d0426e5bd1d06cd"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.393686 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn" event={"ID":"4e779576-9402-40c2-bdf6-a62360dc60b3","Type":"ContainerStarted","Data":"67961a0a575a5f02f397f6c9c04fb3a60e78e6ffaaa2c2299af661769b85cddb"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.410005 4930 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l4vrl" event={"ID":"84eba226-cf40-4011-a4a0-0cb9e774da5e","Type":"ContainerStarted","Data":"f82aafae15aa459a2d3f3f3a9dbce2e60f1a7857f0b72143105dc5c17afd23d1"} Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.416752 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.417037 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:36.917023494 +0000 UTC m=+143.531351444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.488012 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn"] Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.518456 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.520153 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.020138926 +0000 UTC m=+143.634466876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.534282 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44"] Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.537116 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nglrn"] Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.595680 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8"] Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.619731 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.619846 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.119826158 +0000 UTC m=+143.734154108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.620098 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.620493 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.120482807 +0000 UTC m=+143.734810757 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.626399 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4xdgm"] Nov 24 12:01:36 crc kubenswrapper[4930]: W1124 12:01:36.628674 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfabadb7c_e637_4769_b633_ea2b745bb9e4.slice/crio-4dfadb5ca14d4f1b2b2c6ef911218dbcd4f700b90ad03e985a8af2aeceff42da WatchSource:0}: Error finding container 4dfadb5ca14d4f1b2b2c6ef911218dbcd4f700b90ad03e985a8af2aeceff42da: Status 404 returned error can't find the container with id 4dfadb5ca14d4f1b2b2c6ef911218dbcd4f700b90ad03e985a8af2aeceff42da Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.641151 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9d78l"] Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.651057 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b6v7j"] Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.660947 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h4f7j"] Nov 24 12:01:36 crc kubenswrapper[4930]: W1124 12:01:36.671930 4930 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8f49176_755f_460b_857a_e82ee9abd6d7.slice/crio-28191f22387aadb73120d2c7fea15c0bab08c712e7712831830911bca3f522a1 WatchSource:0}: Error finding container 28191f22387aadb73120d2c7fea15c0bab08c712e7712831830911bca3f522a1: Status 404 returned error can't find the container with id 28191f22387aadb73120d2c7fea15c0bab08c712e7712831830911bca3f522a1 Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.677157 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6sjlm"] Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.717296 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5"] Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.718814 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-m48vx" podStartSLOduration=122.718797119 podStartE2EDuration="2m2.718797119s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:36.648333968 +0000 UTC m=+143.262661928" watchObservedRunningTime="2025-11-24 12:01:36.718797119 +0000 UTC m=+143.333125069" Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.723654 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.734941 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.234899578 +0000 UTC m=+143.849227538 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.750364 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jh7qb" podStartSLOduration=123.750343138 podStartE2EDuration="2m3.750343138s" podCreationTimestamp="2025-11-24 11:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:36.746391503 +0000 UTC m=+143.360719453" watchObservedRunningTime="2025-11-24 12:01:36.750343138 +0000 UTC m=+143.364671088" Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.779372 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lpjnn" podStartSLOduration=122.779348652 podStartE2EDuration="2m2.779348652s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:36.771596996 +0000 UTC m=+143.385924946" watchObservedRunningTime="2025-11-24 12:01:36.779348652 +0000 UTC m=+143.393676602" Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.829110 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-etcd-operator/etcd-operator-b45778765-z7lsz" podStartSLOduration=122.82908516 podStartE2EDuration="2m2.82908516s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:36.823080645 +0000 UTC m=+143.437408595" watchObservedRunningTime="2025-11-24 12:01:36.82908516 +0000 UTC m=+143.443413110" Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.837720 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.838156 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.338141034 +0000 UTC m=+143.952468984 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.941660 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.941832 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.441802381 +0000 UTC m=+144.056130321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:36 crc kubenswrapper[4930]: I1124 12:01:36.942360 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:36 crc kubenswrapper[4930]: E1124 12:01:36.943827 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.442862882 +0000 UTC m=+144.057190992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.043812 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.044129 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.544087619 +0000 UTC m=+144.158415569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.044722 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.045297 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.545278324 +0000 UTC m=+144.159606274 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.147501 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.147719 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.647673634 +0000 UTC m=+144.262001584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.154175 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.155782 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.65576317 +0000 UTC m=+144.270091120 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.259902 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.260307 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.760289373 +0000 UTC m=+144.374617323 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.371384 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.371845 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.87182281 +0000 UTC m=+144.486150810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.472452 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.472980 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:37.972965514 +0000 UTC m=+144.587293464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.478100 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8" event={"ID":"fabadb7c-e637-4769-b633-ea2b745bb9e4","Type":"ContainerStarted","Data":"4dfadb5ca14d4f1b2b2c6ef911218dbcd4f700b90ad03e985a8af2aeceff42da"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.485028 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" event={"ID":"6639778e-480a-4822-90cb-48d2e976d509","Type":"ContainerStarted","Data":"b3ea897a724ee32ae2845378755b4dee5fbe273002d63ada9e42c045d5c0cb30"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.488182 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6" event={"ID":"43f01bb4-4b85-4160-b8a9-8735ae78908d","Type":"ContainerStarted","Data":"fceb66de00f9fe72b9a2b665db4a197c93afa73005a043195c294152a9e17891"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.491041 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdqq8" event={"ID":"9b68a023-c6c2-458f-a714-a084b12a83cc","Type":"ContainerStarted","Data":"a3d02424cbba941decf53a8b3be1e8c9e4da66aa2fae6e33ffbe73a1d3bd4092"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.491076 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdqq8" event={"ID":"9b68a023-c6c2-458f-a714-a084b12a83cc","Type":"ContainerStarted","Data":"744c22c93f33f94d47c5fa3c398ee7b9990ec598ea47cee12101313e9bc25954"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.502146 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm" event={"ID":"f9ee2c17-9432-45c1-ad58-0c09f7d93ad2","Type":"ContainerStarted","Data":"1b9a1664d99cddf62e436d36d773c8c979a79abb0295c37f69f02587ac1b5dd0"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.537006 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kz2bg" podStartSLOduration=123.536978368 podStartE2EDuration="2m3.536978368s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.531411856 +0000 UTC m=+144.145739806" watchObservedRunningTime="2025-11-24 12:01:37.536978368 +0000 UTC m=+144.151306318" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.545982 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" event={"ID":"b654e8f6-b229-4515-92f7-68367ffa48a2","Type":"ContainerStarted","Data":"ff518cbd11dfa6287f0fb0fcd5a743779e2326c2f0a6bcf6b6aec320481d449a"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.558567 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h4f7j" event={"ID":"4d408e31-d2da-4e32-b951-1900830ae33e","Type":"ContainerStarted","Data":"78094b2f8fb97e3bf58590fb85493389b8f88886753a5e1043656928cdce9754"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.574959 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.577411 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:38.077388494 +0000 UTC m=+144.691716444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.586061 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" event={"ID":"cc9af663-c7f1-485e-a7fc-709da901e9e1","Type":"ContainerStarted","Data":"948ff3e650a165eb67632236ebf0a12a5523f68a57050aaf54b54f9930d9455f"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.586706 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.593391 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdqq8" podStartSLOduration=123.59337363 podStartE2EDuration="2m3.59337363s" podCreationTimestamp="2025-11-24 11:59:34 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.59304205 +0000 UTC m=+144.207370010" watchObservedRunningTime="2025-11-24 12:01:37.59337363 +0000 UTC m=+144.207701580" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.593622 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2l9l6" podStartSLOduration=123.593616107 podStartE2EDuration="2m3.593616107s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.569177345 +0000 UTC m=+144.183505295" watchObservedRunningTime="2025-11-24 12:01:37.593616107 +0000 UTC m=+144.207944057" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.603479 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5bktz" event={"ID":"d6302d88-9f2e-49a0-b1af-8d14585b6e2a","Type":"ContainerStarted","Data":"59e0556862bab266830a68b55e609c4f25bdd4e2004d6710f375b5f50ef7ffaf"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.604921 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-5bktz" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.623728 4930 patch_prober.go:28] interesting pod/console-operator-58897d9998-5bktz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.623801 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5bktz" 
podUID="d6302d88-9f2e-49a0-b1af-8d14585b6e2a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.628826 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" event={"ID":"bc1d3def-6313-4ed4-a518-341e82651b23","Type":"ContainerStarted","Data":"a390076ac2e378c12d81fe2d6aecf4a29f3fd6f2dde29471eceefd241c2fdb47"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.633688 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w" event={"ID":"e4efb1d6-6d6d-4ac5-a3cb-e5da5bedb985","Type":"ContainerStarted","Data":"0c7e6e2e7f0491d7808189d0e424e912db941514d17d2c1270a35418c9b16fd3"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.634328 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" podStartSLOduration=123.634315051 podStartE2EDuration="2m3.634315051s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.634008663 +0000 UTC m=+144.248336623" watchObservedRunningTime="2025-11-24 12:01:37.634315051 +0000 UTC m=+144.248643001" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.648953 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" event={"ID":"fed4ab08-54d0-4526-bd9a-3d1e660fc31a","Type":"ContainerStarted","Data":"ab317a8bed770d0e507ec5d677d443b7eb12f2552b9583dd0d8a8a9b21ccd8f0"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.649854 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.674971 4930 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4ksnz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.675068 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" podUID="fed4ab08-54d0-4526-bd9a-3d1e660fc31a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.676064 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.676179 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:38.17615674 +0000 UTC m=+144.790484690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.676484 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.677557 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:38.17753341 +0000 UTC m=+144.791861360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.694944 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" event={"ID":"28bc15a8-f8ed-4595-8a4f-e0d9e895c085","Type":"ContainerStarted","Data":"f1429a487d81f923f56d6ace032483c6f4407189e1e1d47c613f2d9769393c37"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.705969 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4xdgm" event={"ID":"c8f49176-755f-460b-857a-e82ee9abd6d7","Type":"ContainerStarted","Data":"28191f22387aadb73120d2c7fea15c0bab08c712e7712831830911bca3f522a1"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.707931 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-5bktz" podStartSLOduration=123.707911664 podStartE2EDuration="2m3.707911664s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.7060647 +0000 UTC m=+144.320392650" watchObservedRunningTime="2025-11-24 12:01:37.707911664 +0000 UTC m=+144.322239614" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.710339 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mh45w" podStartSLOduration=123.710321154 podStartE2EDuration="2m3.710321154s" 
podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.659557706 +0000 UTC m=+144.273885656" watchObservedRunningTime="2025-11-24 12:01:37.710321154 +0000 UTC m=+144.324649104" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.718987 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" event={"ID":"ea65b02d-9e8a-4089-b867-d1c7cfb70df5","Type":"ContainerStarted","Data":"1e3b697adb5b8968f2e3cb95e2c09bbd89ced4a8621a4c7af9b675ed29884dfc"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.719037 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" event={"ID":"ea65b02d-9e8a-4089-b867-d1c7cfb70df5","Type":"ContainerStarted","Data":"96513d6402c96f0f5b4b9c3773f15e57a7079e09823e1e37516b9df1e0ec8dd1"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.750955 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" podStartSLOduration=122.750936766 podStartE2EDuration="2m2.750936766s" podCreationTimestamp="2025-11-24 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.733146819 +0000 UTC m=+144.347474769" watchObservedRunningTime="2025-11-24 12:01:37.750936766 +0000 UTC m=+144.365264716" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.768434 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4" event={"ID":"080a5d44-2fa6-4e44-bd77-59047f85aea9","Type":"ContainerStarted","Data":"a6fe4431d1fa6d46b5987b3ae607e29c458129519869e59bd7c1b398afbf6f36"} Nov 24 12:01:37 crc 
kubenswrapper[4930]: I1124 12:01:37.774479 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-65dmj" event={"ID":"7acbb8d1-6df2-4f3c-825a-d4f8104caceb","Type":"ContainerStarted","Data":"eec1c08ac1e422608d88bfd6e350591db9a0368896060a7b2f95a49c41435a27"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.777809 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.779244 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:38.27922299 +0000 UTC m=+144.893550930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.780894 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" event={"ID":"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3","Type":"ContainerStarted","Data":"7a1235ad986f1b91323cf322526535c61e52ad7d8cbec7ad5d07b31685b3ba71"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.781206 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" event={"ID":"53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3","Type":"ContainerStarted","Data":"8f303c52471edc278fd4f1c7c1550a68aaff3c9c72c632dbf10324d2bff3fc09"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.781488 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.785583 4930 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q5hrn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.785654 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" podUID="53cc9a8d-3376-4a2f-b3ea-0d3e6d5bacc3" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.786514 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" event={"ID":"4431f8fc-1c3c-462c-ae64-b2ab77eb9d57","Type":"ContainerStarted","Data":"4b43800433c50fb3f14beb216fb496291c757765cbe0a6aa07013d355f58b7c0"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.787326 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.792572 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjpj4" podStartSLOduration=123.792529137 podStartE2EDuration="2m3.792529137s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.792070314 +0000 UTC m=+144.406398264" watchObservedRunningTime="2025-11-24 12:01:37.792529137 +0000 UTC m=+144.406857087" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.795994 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" podStartSLOduration=97.795970478 podStartE2EDuration="1m37.795970478s" podCreationTimestamp="2025-11-24 12:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.751654797 +0000 UTC m=+144.365982747" watchObservedRunningTime="2025-11-24 12:01:37.795970478 +0000 UTC m=+144.410298428" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.814031 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" event={"ID":"f956bae9-4db9-4698-bb42-5b6c872d8b35","Type":"ContainerStarted","Data":"2dbb21355e73a3c60f9b57a4f4c4c0f0b44ef1ee80f33c0ea34089f6cec1110c"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.818045 4930 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-brjtv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.818121 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" podUID="4431f8fc-1c3c-462c-ae64-b2ab77eb9d57" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.819351 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-65dmj" podStartSLOduration=6.819335448 podStartE2EDuration="6.819335448s" podCreationTimestamp="2025-11-24 12:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.818412851 +0000 UTC m=+144.432740801" watchObservedRunningTime="2025-11-24 12:01:37.819335448 +0000 UTC m=+144.433663398" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.830658 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j" event={"ID":"7ef6223b-8ceb-4a44-b845-985899aff96b","Type":"ContainerStarted","Data":"1475b7c27597460f0591b54c47236dbb95d31aac7413bfdb07b9b0e6a2b8f995"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.832846 4930 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" event={"ID":"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb","Type":"ContainerStarted","Data":"c44a775ae5793314b2a9bef92f745bec5a3fafa51131e1d19b647c105c330636"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.832874 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" event={"ID":"c4ccb41c-8cdd-4751-8012-49fae4dc2bcb","Type":"ContainerStarted","Data":"6d3de34209d2e86b08c640b1d52e451e5e5cd51102dedae88afc497b6ac4657e"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.845411 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bkp4v" event={"ID":"977b7ce9-3cab-4d86-b297-e062e48195b5","Type":"ContainerStarted","Data":"52d32aa65de686ea428431422688ed99888815566f32fa003e9ed1c9588c6843"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.845457 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bkp4v" event={"ID":"977b7ce9-3cab-4d86-b297-e062e48195b5","Type":"ContainerStarted","Data":"fe9ce2c5c3d4aa364e6e61db35f7761ec9c4f8572ba7e110a2b5a46a83b7fca3"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.861622 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" event={"ID":"44732887-85ec-4418-a663-c3a5504e926f","Type":"ContainerStarted","Data":"7329ecfea61f87cc3b67923e431cd9f40423cccb9e81898c66896d2d2cdd8e16"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.862656 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" podStartSLOduration=122.862646629 podStartE2EDuration="2m2.862646629s" podCreationTimestamp="2025-11-24 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.860660251 +0000 UTC m=+144.474988201" watchObservedRunningTime="2025-11-24 12:01:37.862646629 +0000 UTC m=+144.476974579" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.871206 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jkm8r" event={"ID":"b85d7650-00f5-41a0-b862-b884dd7190cc","Type":"ContainerStarted","Data":"f9662fc2abde4acd16a0f23adc07670b77f29e5994c52cc5ec39faa9bec5b5e2"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.877465 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" event={"ID":"71dbe23d-480e-43aa-8106-b19ae5b98734","Type":"ContainerStarted","Data":"cb5b3f4bfb9e17f80b13844ddac5f2f2fafceca1d0d79e68466cbb2c33bf1070"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.879927 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.883988 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:38.383973269 +0000 UTC m=+144.998301219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.885798 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" podStartSLOduration=122.885776892 podStartE2EDuration="2m2.885776892s" podCreationTimestamp="2025-11-24 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.885459113 +0000 UTC m=+144.499787083" watchObservedRunningTime="2025-11-24 12:01:37.885776892 +0000 UTC m=+144.500104852" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.900606 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s42xf" event={"ID":"b1ab09e9-91ba-481e-b364-12e2a90bed8e","Type":"ContainerStarted","Data":"7226b1538654c2a11bfddad031ab44b48675f690673bde6eb19836e3a69d0776"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.917292 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l4vrl" event={"ID":"84eba226-cf40-4011-a4a0-0cb9e774da5e","Type":"ContainerStarted","Data":"93a5e35cc17d41408258ff935629bc500443bc91d47753513ee764b7735eed4b"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.918464 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-l4vrl" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.923317 4930 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-l4vrl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.923444 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l4vrl" podUID="84eba226-cf40-4011-a4a0-0cb9e774da5e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.924289 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jkm8r" podStartSLOduration=123.924266082 podStartE2EDuration="2m3.924266082s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.923913502 +0000 UTC m=+144.538241462" watchObservedRunningTime="2025-11-24 12:01:37.924266082 +0000 UTC m=+144.538594032" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.952913 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" event={"ID":"8eada43a-ea1e-4565-a042-716f030ba99d","Type":"ContainerStarted","Data":"db946928ef655d99a6877240511608667d38504f1ce1c5a34bcb8c8c56fd5935"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.953899 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.954917 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-55nsz" podStartSLOduration=123.954896364 podStartE2EDuration="2m3.954896364s" 
podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.951822355 +0000 UTC m=+144.566150315" watchObservedRunningTime="2025-11-24 12:01:37.954896364 +0000 UTC m=+144.569224314" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.956444 4930 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dkr44 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.956488 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" podUID="8eada43a-ea1e-4565-a042-716f030ba99d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.981608 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" event={"ID":"f9dd2e0b-db34-4962-a370-03deea21911a","Type":"ContainerStarted","Data":"a305cfe4acc0b2bf68e7db9a2e9550eee360ab0acd432fb92bad29672e0c1838"} Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.981600 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-226nn" podStartSLOduration=123.981583781 podStartE2EDuration="2m3.981583781s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:37.98122093 +0000 UTC m=+144.595548880" watchObservedRunningTime="2025-11-24 12:01:37.981583781 +0000 UTC 
m=+144.595911731" Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.982113 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.982188 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:38.482175188 +0000 UTC m=+145.096503138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:37 crc kubenswrapper[4930]: I1124 12:01:37.985831 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:37 crc kubenswrapper[4930]: E1124 12:01:37.994874 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 12:01:38.494851487 +0000 UTC m=+145.109179437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.030532 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7" event={"ID":"2920b6e9-9296-4249-a539-f84d65e0d79c","Type":"ContainerStarted","Data":"e8baf71cec5bbfd24914404bd48416b22cec2cfeaef48cf784918b6b20c16892"} Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.041351 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-bkp4v" podStartSLOduration=7.04132254 podStartE2EDuration="7.04132254s" podCreationTimestamp="2025-11-24 12:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:38.025353795 +0000 UTC m=+144.639681745" watchObservedRunningTime="2025-11-24 12:01:38.04132254 +0000 UTC m=+144.655650480" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.064870 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qh578" event={"ID":"f8908bf3-e171-4859-80c7-baa64ca6e11c","Type":"ContainerStarted","Data":"0e9a6a67a4f154aebce8a5b30f31f1590f3a0029827e843608b1f14ee9054fe4"} Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.066332 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dcf6j" podStartSLOduration=124.066321598 podStartE2EDuration="2m4.066321598s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:38.064903187 +0000 UTC m=+144.679231137" watchObservedRunningTime="2025-11-24 12:01:38.066321598 +0000 UTC m=+144.680649548" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.069107 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qh578" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.097840 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.098770 4930 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qh578 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.098815 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qh578" podUID="f8908bf3-e171-4859-80c7-baa64ca6e11c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.099729 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn" event={"ID":"27e38351-809f-4f9e-9c07-7930a5db7b0b","Type":"ContainerStarted","Data":"d08c78bdc15fcb66b253b1cae0a6b61cd6cc604826efcfa2030c02ea81627e61"} Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.099784 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn" event={"ID":"27e38351-809f-4f9e-9c07-7930a5db7b0b","Type":"ContainerStarted","Data":"164c1da2268712d6af076949386e4c261cddea914039c037434af0abd22319f7"} Nov 24 12:01:38 crc kubenswrapper[4930]: E1124 12:01:38.100112 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:38.60009248 +0000 UTC m=+145.214420430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.100672 4930 generic.go:334] "Generic (PLEG): container finished" podID="d144e669-4571-4f1e-91f4-8584b50743ec" containerID="8dff1f55b5b8d3fd8e05e32a2917262454d13ed3d14bb506c7b16f5d90be1dab" exitCode=0 Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.102602 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" event={"ID":"d144e669-4571-4f1e-91f4-8584b50743ec","Type":"ContainerDied","Data":"8dff1f55b5b8d3fd8e05e32a2917262454d13ed3d14bb506c7b16f5d90be1dab"} Nov 24 
12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.103468 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" podStartSLOduration=123.103448868 podStartE2EDuration="2m3.103448868s" podCreationTimestamp="2025-11-24 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:38.098125583 +0000 UTC m=+144.712453533" watchObservedRunningTime="2025-11-24 12:01:38.103448868 +0000 UTC m=+144.717776818" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.138134 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" podStartSLOduration=124.138097466 podStartE2EDuration="2m4.138097466s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:38.133451741 +0000 UTC m=+144.747779711" watchObservedRunningTime="2025-11-24 12:01:38.138097466 +0000 UTC m=+144.752425556" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.201034 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:38 crc kubenswrapper[4930]: E1124 12:01:38.206006 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:38.705986723 +0000 UTC m=+145.320314913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.214843 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-l4vrl" podStartSLOduration=124.21481968 podStartE2EDuration="2m4.21481968s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:38.178826232 +0000 UTC m=+144.793154192" watchObservedRunningTime="2025-11-24 12:01:38.21481968 +0000 UTC m=+144.829147630" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.263623 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7jz7" podStartSLOduration=124.26359439 podStartE2EDuration="2m4.26359439s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:38.21240798 +0000 UTC m=+144.826735930" watchObservedRunningTime="2025-11-24 12:01:38.26359439 +0000 UTC m=+144.877922380" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.292182 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qh578" podStartSLOduration=124.292165962 podStartE2EDuration="2m4.292165962s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:38.291694978 +0000 UTC m=+144.906022928" watchObservedRunningTime="2025-11-24 12:01:38.292165962 +0000 UTC m=+144.906493912" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.312293 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:38 crc kubenswrapper[4930]: E1124 12:01:38.312833 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:38.812814253 +0000 UTC m=+145.427142213 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.414411 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:38 crc kubenswrapper[4930]: E1124 12:01:38.415133 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:38.915094811 +0000 UTC m=+145.529422761 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.516240 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:38 crc kubenswrapper[4930]: E1124 12:01:38.516456 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.01642315 +0000 UTC m=+145.630751100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.516683 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:38 crc kubenswrapper[4930]: E1124 12:01:38.517192 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.017175552 +0000 UTC m=+145.631503502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.567079 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jkm8r" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.571393 4930 patch_prober.go:28] interesting pod/router-default-5444994796-jkm8r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:01:38 crc kubenswrapper[4930]: [-]has-synced failed: reason withheld Nov 24 12:01:38 crc kubenswrapper[4930]: [+]process-running ok Nov 24 12:01:38 crc kubenswrapper[4930]: healthz check failed Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.571467 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jkm8r" podUID="b85d7650-00f5-41a0-b862-b884dd7190cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.589081 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.589123 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.590464 4930 patch_prober.go:28] interesting 
pod/apiserver-7bbb656c7d-hrls7 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.590516 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" podUID="f9dd2e0b-db34-4962-a370-03deea21911a" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.617765 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:38 crc kubenswrapper[4930]: E1124 12:01:38.617943 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.117917285 +0000 UTC m=+145.732245235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.618117 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:38 crc kubenswrapper[4930]: E1124 12:01:38.618467 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.118450791 +0000 UTC m=+145.732778741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.718928 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:38 crc kubenswrapper[4930]: E1124 12:01:38.719153 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.219118011 +0000 UTC m=+145.833445981 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.720039 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:38 crc kubenswrapper[4930]: E1124 12:01:38.720268 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.220247834 +0000 UTC m=+145.834575784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.821399 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:38 crc kubenswrapper[4930]: E1124 12:01:38.821784 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.321758719 +0000 UTC m=+145.936086669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:38 crc kubenswrapper[4930]: I1124 12:01:38.922845 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:38 crc kubenswrapper[4930]: E1124 12:01:38.923171 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.423159601 +0000 UTC m=+146.037487551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.023636 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.023812 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.523784311 +0000 UTC m=+146.138112261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.024283 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.024750 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.524730348 +0000 UTC m=+146.139058358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.107552 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" event={"ID":"f956bae9-4db9-4698-bb42-5b6c872d8b35","Type":"ContainerStarted","Data":"6eaeb6c045deead5ede4890a79784d229321b454bd2312f8a875f734705a06ec"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.107781 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.109627 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" event={"ID":"b654e8f6-b229-4515-92f7-68367ffa48a2","Type":"ContainerStarted","Data":"9355045e8cb166dd33169a043879dafafe9695e475b8881ae087a8a9e767acbf"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.109742 4930 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-9d78l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused" start-of-body= Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.109776 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" podUID="f956bae9-4db9-4698-bb42-5b6c872d8b35" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.114929 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" event={"ID":"28bc15a8-f8ed-4595-8a4f-e0d9e895c085","Type":"ContainerStarted","Data":"976fa73f567a85cca1850ac74c730b7be62ae0738dce24f13a1f977cd4bbfd2f"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.116368 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" event={"ID":"44732887-85ec-4418-a663-c3a5504e926f","Type":"ContainerStarted","Data":"fc202b1a4124427a75b5966276d752a0004081e266ea5933db8ceee9f7cd502d"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.122907 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s42xf" event={"ID":"b1ab09e9-91ba-481e-b364-12e2a90bed8e","Type":"ContainerStarted","Data":"e5417a3c24780511195d0f9e42cbf2bcb7420524db8e6967dae173cb8edf17fe"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.124758 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.124878 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.624861593 +0000 UTC m=+146.239189543 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.124977 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.125726 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.625695947 +0000 UTC m=+146.240024017 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.130296 4930 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-c9vv5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.130367 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" podUID="bc1d3def-6313-4ed4-a518-341e82651b23" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.130710 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" event={"ID":"bc1d3def-6313-4ed4-a518-341e82651b23","Type":"ContainerStarted","Data":"f7299344326e7c0e9318e6d8a3b5638f299cbb3a307b2330cf7aec84b7a3ebf7"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.130749 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.140688 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" podStartSLOduration=125.140670033 
podStartE2EDuration="2m5.140670033s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:39.139000915 +0000 UTC m=+145.753328865" watchObservedRunningTime="2025-11-24 12:01:39.140670033 +0000 UTC m=+145.754997983" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.143281 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nglrn" podStartSLOduration=124.143271919 podStartE2EDuration="2m4.143271919s" podCreationTimestamp="2025-11-24 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:38.334827234 +0000 UTC m=+144.949155184" watchObservedRunningTime="2025-11-24 12:01:39.143271919 +0000 UTC m=+145.757599869" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.164849 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm" event={"ID":"f9ee2c17-9432-45c1-ad58-0c09f7d93ad2","Type":"ContainerStarted","Data":"9d2ae0105f04b9578b140d77bfd377dd143799864e1439a806b1c95b1dccd976"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.167038 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-kw8wv" podStartSLOduration=125.1670146 podStartE2EDuration="2m5.1670146s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:39.16492699 +0000 UTC m=+145.779254950" watchObservedRunningTime="2025-11-24 12:01:39.1670146 +0000 UTC m=+145.781342550" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.184048 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8" event={"ID":"fabadb7c-e637-4769-b633-ea2b745bb9e4","Type":"ContainerStarted","Data":"86dd1a07b852acf15464cdd464b39088c876dd4482dabf878519a22aab2fb910"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.184094 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8" event={"ID":"fabadb7c-e637-4769-b633-ea2b745bb9e4","Type":"ContainerStarted","Data":"042507f3661fde7552ec001d54fa5266fef63144f0eac10b4a7efe0af9068549"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.189052 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" podStartSLOduration=124.189033871 podStartE2EDuration="2m4.189033871s" podCreationTimestamp="2025-11-24 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:39.188958009 +0000 UTC m=+145.803285959" watchObservedRunningTime="2025-11-24 12:01:39.189033871 +0000 UTC m=+145.803361821" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.211867 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4xdgm" event={"ID":"c8f49176-755f-460b-857a-e82ee9abd6d7","Type":"ContainerStarted","Data":"b4bc533708f06fb06124908a27ff615ffd6033179a7824df4db59a2f33e3279a"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.211910 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4xdgm" event={"ID":"c8f49176-755f-460b-857a-e82ee9abd6d7","Type":"ContainerStarted","Data":"5889fb3b32cc5142ad584c5f8d9cacfc38f484b49bba2cd8ca5aa649856df898"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.212384 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-4xdgm" Nov 24 12:01:39 crc 
kubenswrapper[4930]: I1124 12:01:39.228089 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.229702 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.729678435 +0000 UTC m=+146.344006435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.240029 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" podStartSLOduration=126.240010135 podStartE2EDuration="2m6.240010135s" podCreationTimestamp="2025-11-24 11:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:39.216858251 +0000 UTC m=+145.831186221" watchObservedRunningTime="2025-11-24 12:01:39.240010135 +0000 UTC m=+145.854338085" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.249174 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-h4f7j" event={"ID":"4d408e31-d2da-4e32-b951-1900830ae33e","Type":"ContainerStarted","Data":"a334d5fee4042ada7705573cdf0d88a7130bb7c70a47f13fc023f406def1a309"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.249220 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h4f7j" event={"ID":"4d408e31-d2da-4e32-b951-1900830ae33e","Type":"ContainerStarted","Data":"927897b69bc88db65be946cfd381737f63d6d9d80af4363b3e698768bff78ce8"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.309223 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pt8p8" podStartSLOduration=125.30920836 podStartE2EDuration="2m5.30920836s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:39.307759298 +0000 UTC m=+145.922087248" watchObservedRunningTime="2025-11-24 12:01:39.30920836 +0000 UTC m=+145.923536310" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.310074 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-s42xf" podStartSLOduration=125.310070415 podStartE2EDuration="2m5.310070415s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:39.239772609 +0000 UTC m=+145.854100559" watchObservedRunningTime="2025-11-24 12:01:39.310070415 +0000 UTC m=+145.924398365" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.313460 4930 patch_prober.go:28] interesting pod/downloads-7954f5f757-l4vrl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.313981 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l4vrl" podUID="84eba226-cf40-4011-a4a0-0cb9e774da5e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.313677 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" event={"ID":"d144e669-4571-4f1e-91f4-8584b50743ec","Type":"ContainerStarted","Data":"129e24ef8f60a03f80f3c551e74b663f6856863f82e6ff2c72b9cb03d1de1742"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.314259 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" event={"ID":"d144e669-4571-4f1e-91f4-8584b50743ec","Type":"ContainerStarted","Data":"92740fa13156d7072fa326d797856542b79d6e221c016f3ea99f8fec2cbb2f5e"} Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.314785 4930 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qh578 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.314959 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qh578" podUID="f8908bf3-e171-4859-80c7-baa64ca6e11c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.315007 4930 patch_prober.go:28] interesting 
pod/console-operator-58897d9998-5bktz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.315057 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5bktz" podUID="d6302d88-9f2e-49a0-b1af-8d14585b6e2a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.321987 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.326527 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brjtv" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.332249 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.333474 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.833462086 +0000 UTC m=+146.447790036 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.351520 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.362774 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6sjlm" podStartSLOduration=124.362753079 podStartE2EDuration="2m4.362753079s" podCreationTimestamp="2025-11-24 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:39.347049641 +0000 UTC m=+145.961377591" watchObservedRunningTime="2025-11-24 12:01:39.362753079 +0000 UTC m=+145.977081029" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.378527 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-4xdgm" podStartSLOduration=8.378509287 podStartE2EDuration="8.378509287s" podCreationTimestamp="2025-11-24 12:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:39.374437139 +0000 UTC m=+145.988765089" watchObservedRunningTime="2025-11-24 12:01:39.378509287 +0000 UTC m=+145.992837237" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.433486 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.436707 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:39.936670601 +0000 UTC m=+146.550998551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.443118 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.443432 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.472688 4930 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xhvvt container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.472752 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" podUID="d144e669-4571-4f1e-91f4-8584b50743ec" 
containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.546249 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.546898 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.046881649 +0000 UTC m=+146.661209599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.551645 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-h4f7j" podStartSLOduration=125.551620237 podStartE2EDuration="2m5.551620237s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:39.488964303 +0000 UTC m=+146.103292253" watchObservedRunningTime="2025-11-24 12:01:39.551620237 
+0000 UTC m=+146.165948187" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.579731 4930 patch_prober.go:28] interesting pod/router-default-5444994796-jkm8r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:01:39 crc kubenswrapper[4930]: [-]has-synced failed: reason withheld Nov 24 12:01:39 crc kubenswrapper[4930]: [+]process-running ok Nov 24 12:01:39 crc kubenswrapper[4930]: healthz check failed Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.579806 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jkm8r" podUID="b85d7650-00f5-41a0-b862-b884dd7190cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.646965 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" podStartSLOduration=125.646950402 podStartE2EDuration="2m5.646950402s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:39.644671496 +0000 UTC m=+146.258999476" watchObservedRunningTime="2025-11-24 12:01:39.646950402 +0000 UTC m=+146.261278352" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.648058 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.648484 4930 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.148471106 +0000 UTC m=+146.762799056 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.656579 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jl279" Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.751714 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.752103 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.252089563 +0000 UTC m=+146.866417513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.852742 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.852923 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.352898318 +0000 UTC m=+146.967226268 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.853068 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.853371 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.353364151 +0000 UTC m=+146.967692101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.954403 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.954633 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.454601898 +0000 UTC m=+147.068929848 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.954784 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:39 crc kubenswrapper[4930]: E1124 12:01:39.955128 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.455117163 +0000 UTC m=+147.069445183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:39 crc kubenswrapper[4930]: I1124 12:01:39.972818 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q5hrn" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.055506 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.055868 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.555834836 +0000 UTC m=+147.170162786 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.056102 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.056438 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.556429983 +0000 UTC m=+147.170757933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.157767 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.157956 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.657929778 +0000 UTC m=+147.272257718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.158071 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.158421 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.658404242 +0000 UTC m=+147.272732192 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.258943 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.259197 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.759165345 +0000 UTC m=+147.373493295 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.259273 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.259591 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.759583027 +0000 UTC m=+147.373910977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.263998 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4tv4h"] Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.264885 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tv4h" Nov 24 12:01:40 crc kubenswrapper[4930]: W1124 12:01:40.270394 4930 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.270451 4930 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.292095 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tv4h"] Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.331934 4930 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" event={"ID":"44732887-85ec-4418-a663-c3a5504e926f","Type":"ContainerStarted","Data":"e664ee4fa58540e2e073d0696893319be39ab7a57a93f5de996f7f26974f1cf1"} Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.333594 4930 patch_prober.go:28] interesting pod/downloads-7954f5f757-l4vrl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.333667 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l4vrl" podUID="84eba226-cf40-4011-a4a0-0cb9e774da5e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.346343 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qh578" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.360474 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.360650 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.860619148 +0000 UTC m=+147.474947098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.360924 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkjkh\" (UniqueName: \"kubernetes.io/projected/bc7bba02-37bc-4786-bd0a-3b5710779d25-kube-api-access-tkjkh\") pod \"certified-operators-4tv4h\" (UID: \"bc7bba02-37bc-4786-bd0a-3b5710779d25\") " pod="openshift-marketplace/certified-operators-4tv4h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.360969 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7bba02-37bc-4786-bd0a-3b5710779d25-catalog-content\") pod \"certified-operators-4tv4h\" (UID: \"bc7bba02-37bc-4786-bd0a-3b5710779d25\") " pod="openshift-marketplace/certified-operators-4tv4h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.361141 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.361209 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bc7bba02-37bc-4786-bd0a-3b5710779d25-utilities\") pod \"certified-operators-4tv4h\" (UID: \"bc7bba02-37bc-4786-bd0a-3b5710779d25\") " pod="openshift-marketplace/certified-operators-4tv4h" Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.361975 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.861954547 +0000 UTC m=+147.476282497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.404406 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c9vv5" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.462988 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.463302 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkjkh\" (UniqueName: \"kubernetes.io/projected/bc7bba02-37bc-4786-bd0a-3b5710779d25-kube-api-access-tkjkh\") pod \"certified-operators-4tv4h\" (UID: \"bc7bba02-37bc-4786-bd0a-3b5710779d25\") " 
pod="openshift-marketplace/certified-operators-4tv4h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.463434 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7bba02-37bc-4786-bd0a-3b5710779d25-catalog-content\") pod \"certified-operators-4tv4h\" (UID: \"bc7bba02-37bc-4786-bd0a-3b5710779d25\") " pod="openshift-marketplace/certified-operators-4tv4h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.464400 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7bba02-37bc-4786-bd0a-3b5710779d25-utilities\") pod \"certified-operators-4tv4h\" (UID: \"bc7bba02-37bc-4786-bd0a-3b5710779d25\") " pod="openshift-marketplace/certified-operators-4tv4h" Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.465321 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:40.965295656 +0000 UTC m=+147.579623606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.488154 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7bba02-37bc-4786-bd0a-3b5710779d25-utilities\") pod \"certified-operators-4tv4h\" (UID: \"bc7bba02-37bc-4786-bd0a-3b5710779d25\") " pod="openshift-marketplace/certified-operators-4tv4h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.488858 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7bba02-37bc-4786-bd0a-3b5710779d25-catalog-content\") pod \"certified-operators-4tv4h\" (UID: \"bc7bba02-37bc-4786-bd0a-3b5710779d25\") " pod="openshift-marketplace/certified-operators-4tv4h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.499917 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ss26h"] Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.515164 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.520246 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.571520 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.571907 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:41.071879389 +0000 UTC m=+147.686207349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.577651 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkjkh\" (UniqueName: \"kubernetes.io/projected/bc7bba02-37bc-4786-bd0a-3b5710779d25-kube-api-access-tkjkh\") pod \"certified-operators-4tv4h\" (UID: \"bc7bba02-37bc-4786-bd0a-3b5710779d25\") " pod="openshift-marketplace/certified-operators-4tv4h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.585347 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ss26h"] Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.597586 4930 patch_prober.go:28] interesting pod/router-default-5444994796-jkm8r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:01:40 crc kubenswrapper[4930]: [-]has-synced failed: reason withheld Nov 24 12:01:40 crc kubenswrapper[4930]: [+]process-running ok Nov 24 12:01:40 crc kubenswrapper[4930]: healthz check failed Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.597659 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jkm8r" podUID="b85d7650-00f5-41a0-b862-b884dd7190cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.672510 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.672796 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-utilities\") pod \"community-operators-ss26h\" (UID: \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\") " pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.672862 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvb7j\" (UniqueName: \"kubernetes.io/projected/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-kube-api-access-wvb7j\") pod \"community-operators-ss26h\" (UID: \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\") " pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.672888 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-catalog-content\") pod \"community-operators-ss26h\" (UID: \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\") " pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.673105 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:41.173073805 +0000 UTC m=+147.787401755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.690734 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6hsrz"] Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.691674 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6hsrz" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.706681 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-5bktz" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.773934 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe296064-195b-42d0-a0a1-8012587b8e04-utilities\") pod \"certified-operators-6hsrz\" (UID: \"fe296064-195b-42d0-a0a1-8012587b8e04\") " pod="openshift-marketplace/certified-operators-6hsrz" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.773981 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-utilities\") pod \"community-operators-ss26h\" (UID: \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\") " pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.774007 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/fe296064-195b-42d0-a0a1-8012587b8e04-catalog-content\") pod \"certified-operators-6hsrz\" (UID: \"fe296064-195b-42d0-a0a1-8012587b8e04\") " pod="openshift-marketplace/certified-operators-6hsrz" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.774040 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvb7j\" (UniqueName: \"kubernetes.io/projected/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-kube-api-access-wvb7j\") pod \"community-operators-ss26h\" (UID: \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\") " pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.774064 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-catalog-content\") pod \"community-operators-ss26h\" (UID: \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\") " pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.774191 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjhkm\" (UniqueName: \"kubernetes.io/projected/fe296064-195b-42d0-a0a1-8012587b8e04-kube-api-access-hjhkm\") pod \"certified-operators-6hsrz\" (UID: \"fe296064-195b-42d0-a0a1-8012587b8e04\") " pod="openshift-marketplace/certified-operators-6hsrz" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.774227 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.774609 4930 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:41.27459724 +0000 UTC m=+147.888925190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.774851 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-catalog-content\") pod \"community-operators-ss26h\" (UID: \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\") " pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.774945 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-utilities\") pod \"community-operators-ss26h\" (UID: \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\") " pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.824147 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvb7j\" (UniqueName: \"kubernetes.io/projected/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-kube-api-access-wvb7j\") pod \"community-operators-ss26h\" (UID: \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\") " pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.838899 4930 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hsrz"] Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.882631 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k4kdl"] Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.883776 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.885116 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.885344 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjhkm\" (UniqueName: \"kubernetes.io/projected/fe296064-195b-42d0-a0a1-8012587b8e04-kube-api-access-hjhkm\") pod \"certified-operators-6hsrz\" (UID: \"fe296064-195b-42d0-a0a1-8012587b8e04\") " pod="openshift-marketplace/certified-operators-6hsrz" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.885407 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe296064-195b-42d0-a0a1-8012587b8e04-utilities\") pod \"certified-operators-6hsrz\" (UID: \"fe296064-195b-42d0-a0a1-8012587b8e04\") " pod="openshift-marketplace/certified-operators-6hsrz" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.885432 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe296064-195b-42d0-a0a1-8012587b8e04-catalog-content\") pod \"certified-operators-6hsrz\" (UID: \"fe296064-195b-42d0-a0a1-8012587b8e04\") " 
pod="openshift-marketplace/certified-operators-6hsrz" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.885868 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe296064-195b-42d0-a0a1-8012587b8e04-catalog-content\") pod \"certified-operators-6hsrz\" (UID: \"fe296064-195b-42d0-a0a1-8012587b8e04\") " pod="openshift-marketplace/certified-operators-6hsrz" Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.885937 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:41.385921361 +0000 UTC m=+148.000249311 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.886358 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe296064-195b-42d0-a0a1-8012587b8e04-utilities\") pod \"certified-operators-6hsrz\" (UID: \"fe296064-195b-42d0-a0a1-8012587b8e04\") " pod="openshift-marketplace/certified-operators-6hsrz" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.910933 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.934515 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k4kdl"] Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.979577 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjhkm\" (UniqueName: \"kubernetes.io/projected/fe296064-195b-42d0-a0a1-8012587b8e04-kube-api-access-hjhkm\") pod \"certified-operators-6hsrz\" (UID: \"fe296064-195b-42d0-a0a1-8012587b8e04\") " pod="openshift-marketplace/certified-operators-6hsrz" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.987562 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b72afae-5c1d-429f-98b7-27368332e3b1-catalog-content\") pod \"community-operators-k4kdl\" (UID: \"8b72afae-5c1d-429f-98b7-27368332e3b1\") " pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.987665 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.987728 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtnbv\" (UniqueName: \"kubernetes.io/projected/8b72afae-5c1d-429f-98b7-27368332e3b1-kube-api-access-mtnbv\") pod \"community-operators-k4kdl\" (UID: \"8b72afae-5c1d-429f-98b7-27368332e3b1\") " pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:01:40 crc kubenswrapper[4930]: 
I1124 12:01:40.987784 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b72afae-5c1d-429f-98b7-27368332e3b1-utilities\") pod \"community-operators-k4kdl\" (UID: \"8b72afae-5c1d-429f-98b7-27368332e3b1\") " pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.987817 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:40 crc kubenswrapper[4930]: I1124 12:01:40.987912 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:40 crc kubenswrapper[4930]: E1124 12:01:40.988390 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:41.488365653 +0000 UTC m=+148.102693603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:40.995681 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:40.998492 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.022290 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.089316 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.089476 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b72afae-5c1d-429f-98b7-27368332e3b1-catalog-content\") pod \"community-operators-k4kdl\" (UID: \"8b72afae-5c1d-429f-98b7-27368332e3b1\") " pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.089519 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtnbv\" (UniqueName: \"kubernetes.io/projected/8b72afae-5c1d-429f-98b7-27368332e3b1-kube-api-access-mtnbv\") pod \"community-operators-k4kdl\" (UID: \"8b72afae-5c1d-429f-98b7-27368332e3b1\") " pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.089568 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b72afae-5c1d-429f-98b7-27368332e3b1-utilities\") pod \"community-operators-k4kdl\" (UID: \"8b72afae-5c1d-429f-98b7-27368332e3b1\") " pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.089590 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.089616 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:41 crc kubenswrapper[4930]: E1124 12:01:41.090067 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:41.590048514 +0000 UTC m=+148.204376464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.090442 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b72afae-5c1d-429f-98b7-27368332e3b1-catalog-content\") pod \"community-operators-k4kdl\" (UID: \"8b72afae-5c1d-429f-98b7-27368332e3b1\") " pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.090680 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8b72afae-5c1d-429f-98b7-27368332e3b1-utilities\") pod \"community-operators-k4kdl\" (UID: \"8b72afae-5c1d-429f-98b7-27368332e3b1\") " pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.095845 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.107193 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.125889 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.158432 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtnbv\" (UniqueName: \"kubernetes.io/projected/8b72afae-5c1d-429f-98b7-27368332e3b1-kube-api-access-mtnbv\") pod \"community-operators-k4kdl\" (UID: \"8b72afae-5c1d-429f-98b7-27368332e3b1\") " pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.193247 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" 
(UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:41 crc kubenswrapper[4930]: E1124 12:01:41.193558 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:41.693532276 +0000 UTC m=+148.307860226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.237903 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.300672 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.304025 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:41 crc kubenswrapper[4930]: E1124 12:01:41.304522 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:41.804502897 +0000 UTC m=+148.418830847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.316507 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.408828 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:41 crc kubenswrapper[4930]: E1124 12:01:41.409178 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:41.909164314 +0000 UTC m=+148.523492254 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.447594 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" event={"ID":"44732887-85ec-4418-a663-c3a5504e926f","Type":"ContainerStarted","Data":"156b37f341198b82535c1cb19ce119e008bc762ed28d6aca60d15c61bbe2989b"} Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.514398 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:41 crc kubenswrapper[4930]: E1124 12:01:41.515273 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:42.015242862 +0000 UTC m=+148.629570822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.581512 4930 patch_prober.go:28] interesting pod/router-default-5444994796-jkm8r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:01:41 crc kubenswrapper[4930]: [-]has-synced failed: reason withheld Nov 24 12:01:41 crc kubenswrapper[4930]: [+]process-running ok Nov 24 12:01:41 crc kubenswrapper[4930]: healthz check failed Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.581590 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jkm8r" podUID="b85d7650-00f5-41a0-b862-b884dd7190cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.582622 4930 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openshift-marketplace/certified-operators-4tv4h" secret="" err="failed to sync secret cache: timed out waiting for the condition" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.582694 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tv4h" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.616497 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:41 crc kubenswrapper[4930]: E1124 12:01:41.619729 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:42.119698243 +0000 UTC m=+148.734026183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.679699 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.690663 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hsrz" Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.719022 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:41 crc kubenswrapper[4930]: E1124 12:01:41.719454 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:42.219438366 +0000 UTC m=+148.833766316 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.791222 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ss26h"] Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.821144 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:41 crc kubenswrapper[4930]: E1124 
12:01:41.821501 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:42.321488317 +0000 UTC m=+148.935816267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:41 crc kubenswrapper[4930]: W1124 12:01:41.829408 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecbd0a96_64bb_4de8_8d4d_8861e24fd414.slice/crio-4eeb8df799e3fc31a1f0870579edf3045aaa68fb3622f2f00b9523cf479be74b WatchSource:0}: Error finding container 4eeb8df799e3fc31a1f0870579edf3045aaa68fb3622f2f00b9523cf479be74b: Status 404 returned error can't find the container with id 4eeb8df799e3fc31a1f0870579edf3045aaa68fb3622f2f00b9523cf479be74b Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.925160 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:41 crc kubenswrapper[4930]: E1124 12:01:41.929114 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 12:01:42.429086409 +0000 UTC m=+149.043414359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:41 crc kubenswrapper[4930]: I1124 12:01:41.953145 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k4kdl"] Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.030198 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:42 crc kubenswrapper[4930]: E1124 12:01:42.030520 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:42.530507952 +0000 UTC m=+149.144835892 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.131360 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:42 crc kubenswrapper[4930]: E1124 12:01:42.131635 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:42.631620925 +0000 UTC m=+149.245948875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.236676 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:42 crc kubenswrapper[4930]: E1124 12:01:42.237079 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:42.737063555 +0000 UTC m=+149.351391505 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.255522 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tv4h"] Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.269626 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.270367 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.275027 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.275075 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 24 12:01:42 crc kubenswrapper[4930]: W1124 12:01:42.285884 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc7bba02_37bc_4786_bd0a_3b5710779d25.slice/crio-530d4a56f44a899b55bb08fbd65a42a1b366d3b26f4a83437bf66bb1f0eca3b8 WatchSource:0}: Error finding container 530d4a56f44a899b55bb08fbd65a42a1b366d3b26f4a83437bf66bb1f0eca3b8: Status 404 returned error can't find the container with id 530d4a56f44a899b55bb08fbd65a42a1b366d3b26f4a83437bf66bb1f0eca3b8 Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.296755 4930 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.337563 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.337905 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3a171f6-a50f-4a41-bd81-cab660b6f347-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b3a171f6-a50f-4a41-bd81-cab660b6f347\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.337944 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3a171f6-a50f-4a41-bd81-cab660b6f347-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b3a171f6-a50f-4a41-bd81-cab660b6f347\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:01:42 crc kubenswrapper[4930]: E1124 12:01:42.338034 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:42.838006783 +0000 UTC m=+149.452334733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.351455 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hsrz"] Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.397499 4930 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.441346 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3a171f6-a50f-4a41-bd81-cab660b6f347-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b3a171f6-a50f-4a41-bd81-cab660b6f347\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.441389 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3a171f6-a50f-4a41-bd81-cab660b6f347-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b3a171f6-a50f-4a41-bd81-cab660b6f347\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.441434 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" 
(UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:42 crc kubenswrapper[4930]: E1124 12:01:42.441732 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:42.941718703 +0000 UTC m=+149.556046653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.441879 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3a171f6-a50f-4a41-bd81-cab660b6f347-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b3a171f6-a50f-4a41-bd81-cab660b6f347\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.466633 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vggrt"] Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.467787 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.473004 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3a171f6-a50f-4a41-bd81-cab660b6f347-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b3a171f6-a50f-4a41-bd81-cab660b6f347\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.481099 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.485752 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7c0b12ec0e3c74d8cd42044d9e98dce8177a25612a80df9b89d8259b68a26fba"} Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.497512 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vggrt"] Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.522994 4930 generic.go:334] "Generic (PLEG): container finished" podID="8b72afae-5c1d-429f-98b7-27368332e3b1" containerID="8663d65fb537adf6f08be4ce94a7b0ca34f8d40da6f22b7485f61c79aa751011" exitCode=0 Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.523116 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4kdl" event={"ID":"8b72afae-5c1d-429f-98b7-27368332e3b1","Type":"ContainerDied","Data":"8663d65fb537adf6f08be4ce94a7b0ca34f8d40da6f22b7485f61c79aa751011"} Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.523147 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4kdl" 
event={"ID":"8b72afae-5c1d-429f-98b7-27368332e3b1","Type":"ContainerStarted","Data":"4cddb3468c8b0d0d74565729b748df0bbeb993a9b6fe3306d83d47c8c866c814"} Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.529137 4930 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.533911 4930 generic.go:334] "Generic (PLEG): container finished" podID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" containerID="751910811d1af7ca105c6f21b6eaefe16c315e8ac4a894601953aaabf7a33813" exitCode=0 Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.534039 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss26h" event={"ID":"ecbd0a96-64bb-4de8-8d4d-8861e24fd414","Type":"ContainerDied","Data":"751910811d1af7ca105c6f21b6eaefe16c315e8ac4a894601953aaabf7a33813"} Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.534078 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss26h" event={"ID":"ecbd0a96-64bb-4de8-8d4d-8861e24fd414","Type":"ContainerStarted","Data":"4eeb8df799e3fc31a1f0870579edf3045aaa68fb3622f2f00b9523cf479be74b"} Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.543925 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.544164 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"38ebce16693ca9d76f27493ffd8da8748e9cc6d53e019cd8602d80a0070ae642"} Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 
12:01:42.544306 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000050ff-5ba3-4660-be21-00afb861c946-catalog-content\") pod \"redhat-marketplace-vggrt\" (UID: \"000050ff-5ba3-4660-be21-00afb861c946\") " pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:01:42 crc kubenswrapper[4930]: E1124 12:01:42.544373 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:43.04433558 +0000 UTC m=+149.658663530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.544471 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtsdh\" (UniqueName: \"kubernetes.io/projected/000050ff-5ba3-4660-be21-00afb861c946-kube-api-access-jtsdh\") pod \"redhat-marketplace-vggrt\" (UID: \"000050ff-5ba3-4660-be21-00afb861c946\") " pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.544593 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000050ff-5ba3-4660-be21-00afb861c946-utilities\") pod \"redhat-marketplace-vggrt\" (UID: \"000050ff-5ba3-4660-be21-00afb861c946\") " 
pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.544713 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:42 crc kubenswrapper[4930]: E1124 12:01:42.545175 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:43.045167114 +0000 UTC m=+149.659495064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.559461 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tv4h" event={"ID":"bc7bba02-37bc-4786-bd0a-3b5710779d25","Type":"ContainerStarted","Data":"530d4a56f44a899b55bb08fbd65a42a1b366d3b26f4a83437bf66bb1f0eca3b8"} Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.563888 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7c6b5e8d37d80ec56998711e6a6a224c20b0ade0d57fac7ed2bc08437ad93f43"} Nov 24 
12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.571619 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hsrz" event={"ID":"fe296064-195b-42d0-a0a1-8012587b8e04","Type":"ContainerStarted","Data":"0b93d7e179bea2b9bee0285c0371d8c1da8245598bdd5411dd98bf866788f2ed"} Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.575766 4930 patch_prober.go:28] interesting pod/router-default-5444994796-jkm8r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:01:42 crc kubenswrapper[4930]: [-]has-synced failed: reason withheld Nov 24 12:01:42 crc kubenswrapper[4930]: [+]process-running ok Nov 24 12:01:42 crc kubenswrapper[4930]: healthz check failed Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.576004 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jkm8r" podUID="b85d7650-00f5-41a0-b862-b884dd7190cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.587802 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" event={"ID":"44732887-85ec-4418-a663-c3a5504e926f","Type":"ContainerStarted","Data":"e35b32a6b2c94b04bd67c01baac33d39eaf13c0ada99771f8f3f5ea5e1b80ae2"} Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.648116 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.648697 4930 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000050ff-5ba3-4660-be21-00afb861c946-catalog-content\") pod \"redhat-marketplace-vggrt\" (UID: \"000050ff-5ba3-4660-be21-00afb861c946\") " pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.648770 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtsdh\" (UniqueName: \"kubernetes.io/projected/000050ff-5ba3-4660-be21-00afb861c946-kube-api-access-jtsdh\") pod \"redhat-marketplace-vggrt\" (UID: \"000050ff-5ba3-4660-be21-00afb861c946\") " pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.648862 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000050ff-5ba3-4660-be21-00afb861c946-utilities\") pod \"redhat-marketplace-vggrt\" (UID: \"000050ff-5ba3-4660-be21-00afb861c946\") " pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.649378 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000050ff-5ba3-4660-be21-00afb861c946-utilities\") pod \"redhat-marketplace-vggrt\" (UID: \"000050ff-5ba3-4660-be21-00afb861c946\") " pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:01:42 crc kubenswrapper[4930]: E1124 12:01:42.649529 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:43.149502682 +0000 UTC m=+149.763830632 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.650280 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000050ff-5ba3-4660-be21-00afb861c946-catalog-content\") pod \"redhat-marketplace-vggrt\" (UID: \"000050ff-5ba3-4660-be21-00afb861c946\") " pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.651019 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.706254 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtsdh\" (UniqueName: \"kubernetes.io/projected/000050ff-5ba3-4660-be21-00afb861c946-kube-api-access-jtsdh\") pod \"redhat-marketplace-vggrt\" (UID: \"000050ff-5ba3-4660-be21-00afb861c946\") " pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.713247 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-b6v7j" podStartSLOduration=11.713211626 podStartE2EDuration="11.713211626s" podCreationTimestamp="2025-11-24 12:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:42.686923951 +0000 UTC m=+149.301251901" watchObservedRunningTime="2025-11-24 12:01:42.713211626 +0000 UTC 
m=+149.327539576" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.753294 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:42 crc kubenswrapper[4930]: E1124 12:01:42.753628 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:43.253615182 +0000 UTC m=+149.867943132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.826400 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.865242 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:42 crc kubenswrapper[4930]: E1124 12:01:42.865572 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:43.365553721 +0000 UTC m=+149.979881671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.871343 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nwf54"] Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.882071 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.887035 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwf54"] Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.966642 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.966725 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4tp5\" (UniqueName: \"kubernetes.io/projected/e23c1567-d78e-4ffe-b601-6e4c70486428-kube-api-access-v4tp5\") pod \"redhat-marketplace-nwf54\" (UID: \"e23c1567-d78e-4ffe-b601-6e4c70486428\") " pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.966755 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e23c1567-d78e-4ffe-b601-6e4c70486428-utilities\") pod \"redhat-marketplace-nwf54\" (UID: \"e23c1567-d78e-4ffe-b601-6e4c70486428\") " pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:01:42 crc kubenswrapper[4930]: I1124 12:01:42.966822 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e23c1567-d78e-4ffe-b601-6e4c70486428-catalog-content\") pod \"redhat-marketplace-nwf54\" (UID: \"e23c1567-d78e-4ffe-b601-6e4c70486428\") " pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:01:42 crc kubenswrapper[4930]: E1124 
12:01:42.967262 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:43.467232581 +0000 UTC m=+150.081560541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.068078 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.068377 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4tp5\" (UniqueName: \"kubernetes.io/projected/e23c1567-d78e-4ffe-b601-6e4c70486428-kube-api-access-v4tp5\") pod \"redhat-marketplace-nwf54\" (UID: \"e23c1567-d78e-4ffe-b601-6e4c70486428\") " pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.068411 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e23c1567-d78e-4ffe-b601-6e4c70486428-utilities\") pod \"redhat-marketplace-nwf54\" (UID: \"e23c1567-d78e-4ffe-b601-6e4c70486428\") " pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:01:43 crc kubenswrapper[4930]: 
I1124 12:01:43.068472 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e23c1567-d78e-4ffe-b601-6e4c70486428-catalog-content\") pod \"redhat-marketplace-nwf54\" (UID: \"e23c1567-d78e-4ffe-b601-6e4c70486428\") " pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:01:43 crc kubenswrapper[4930]: E1124 12:01:43.068993 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:43.568972803 +0000 UTC m=+150.183300753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.068996 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e23c1567-d78e-4ffe-b601-6e4c70486428-catalog-content\") pod \"redhat-marketplace-nwf54\" (UID: \"e23c1567-d78e-4ffe-b601-6e4c70486428\") " pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.069246 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e23c1567-d78e-4ffe-b601-6e4c70486428-utilities\") pod \"redhat-marketplace-nwf54\" (UID: \"e23c1567-d78e-4ffe-b601-6e4c70486428\") " pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:01:43 crc 
kubenswrapper[4930]: I1124 12:01:43.094851 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4tp5\" (UniqueName: \"kubernetes.io/projected/e23c1567-d78e-4ffe-b601-6e4c70486428-kube-api-access-v4tp5\") pod \"redhat-marketplace-nwf54\" (UID: \"e23c1567-d78e-4ffe-b601-6e4c70486428\") " pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.170420 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:43 crc kubenswrapper[4930]: E1124 12:01:43.170990 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:01:43.670970692 +0000 UTC m=+150.285298642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fpv5v" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.212918 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vggrt"] Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.237446 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.277887 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:43 crc kubenswrapper[4930]: E1124 12:01:43.278386 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:01:43.778365929 +0000 UTC m=+150.392693879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.309684 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.314974 4930 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T12:01:42.397522856Z","Handler":null,"Name":""} Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.374077 4930 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: 
/var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.374523 4930 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.380272 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.382504 4930 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.382555 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.451467 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fpv5v\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.458527 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7td4t"] Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.459461 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7td4t" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.466000 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.482839 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7td4t"] Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.486595 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.493104 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.571131 4930 patch_prober.go:28] interesting pod/router-default-5444994796-jkm8r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:01:43 crc kubenswrapper[4930]: [-]has-synced failed: reason withheld Nov 24 12:01:43 crc kubenswrapper[4930]: [+]process-running ok Nov 24 12:01:43 crc kubenswrapper[4930]: healthz check failed Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.571396 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jkm8r" podUID="b85d7650-00f5-41a0-b862-b884dd7190cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.579644 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwf54"] Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.587845 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-utilities\") pod \"redhat-operators-7td4t\" (UID: \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\") " pod="openshift-marketplace/redhat-operators-7td4t" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.587951 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-catalog-content\") pod \"redhat-operators-7td4t\" (UID: \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\") " pod="openshift-marketplace/redhat-operators-7td4t" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.588012 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf79g\" (UniqueName: \"kubernetes.io/projected/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-kube-api-access-cf79g\") pod \"redhat-operators-7td4t\" (UID: \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\") " pod="openshift-marketplace/redhat-operators-7td4t" Nov 24 12:01:43 crc kubenswrapper[4930]: W1124 12:01:43.592649 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode23c1567_d78e_4ffe_b601_6e4c70486428.slice/crio-ab87a749c549f4739fe04b8989753d9f2968aff1edecbb1aa790995ef7ff7385 WatchSource:0}: Error finding container ab87a749c549f4739fe04b8989753d9f2968aff1edecbb1aa790995ef7ff7385: Status 404 returned error can't find the container with id ab87a749c549f4739fe04b8989753d9f2968aff1edecbb1aa790995ef7ff7385 Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.595585 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.609438 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hrls7" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.615590 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b3a171f6-a50f-4a41-bd81-cab660b6f347","Type":"ContainerStarted","Data":"f1c2a6f360e82428bf7a7df13a01bb94dda9815fe5c135d12f8364701fb9ce54"} Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.637590 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7c765b5bf7503dc5a98621e03142cb80c44cc8d3fe0767aaa50aae11287b4347"} Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.637819 4930 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.640430 4930 generic.go:334] "Generic (PLEG): container finished" podID="000050ff-5ba3-4660-be21-00afb861c946" containerID="1c3b1e1b11b47600d4578ab099ce80e48222786772324f4df560592193ef7fed" exitCode=0 Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.640495 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vggrt" event={"ID":"000050ff-5ba3-4660-be21-00afb861c946","Type":"ContainerDied","Data":"1c3b1e1b11b47600d4578ab099ce80e48222786772324f4df560592193ef7fed"} Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.640520 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vggrt" event={"ID":"000050ff-5ba3-4660-be21-00afb861c946","Type":"ContainerStarted","Data":"a873cca958d075a02cc7b44f05888534b2392e6795dbb664d41bdedb1550ae70"} Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.647852 4930 generic.go:334] "Generic (PLEG): container finished" podID="bc7bba02-37bc-4786-bd0a-3b5710779d25" containerID="97d41366794a0f81e44b1003dba66e4934a1c82f7e17bf74e1f490a54f2d10c5" exitCode=0 Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.647981 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tv4h" event={"ID":"bc7bba02-37bc-4786-bd0a-3b5710779d25","Type":"ContainerDied","Data":"97d41366794a0f81e44b1003dba66e4934a1c82f7e17bf74e1f490a54f2d10c5"} Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.657027 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.659563 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"48073851588afb7a68628214fb80cabadec7c3cd0ca64534edc51586de261e64"} Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.670803 4930 generic.go:334] "Generic (PLEG): container finished" podID="fe296064-195b-42d0-a0a1-8012587b8e04" containerID="a25824c0f75a6ef4878ae2c2a02cebbdfcd91d71de51f4aff84589a710d8d114" exitCode=0 Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.670867 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hsrz" event={"ID":"fe296064-195b-42d0-a0a1-8012587b8e04","Type":"ContainerDied","Data":"a25824c0f75a6ef4878ae2c2a02cebbdfcd91d71de51f4aff84589a710d8d114"} Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.675729 4930 generic.go:334] "Generic (PLEG): container finished" podID="ea65b02d-9e8a-4089-b867-d1c7cfb70df5" containerID="1e3b697adb5b8968f2e3cb95e2c09bbd89ced4a8621a4c7af9b675ed29884dfc" exitCode=0 Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.675824 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" event={"ID":"ea65b02d-9e8a-4089-b867-d1c7cfb70df5","Type":"ContainerDied","Data":"1e3b697adb5b8968f2e3cb95e2c09bbd89ced4a8621a4c7af9b675ed29884dfc"} Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.692691 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-utilities\") pod \"redhat-operators-7td4t\" (UID: \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\") " pod="openshift-marketplace/redhat-operators-7td4t" Nov 24 12:01:43 crc 
kubenswrapper[4930]: I1124 12:01:43.699813 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e13022fc3d448851016ba59c6bcfb6cd6a49a5583e0635fe3b31d21b2fe1997b"} Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.699855 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-utilities\") pod \"redhat-operators-7td4t\" (UID: \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\") " pod="openshift-marketplace/redhat-operators-7td4t" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.699994 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-catalog-content\") pod \"redhat-operators-7td4t\" (UID: \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\") " pod="openshift-marketplace/redhat-operators-7td4t" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.700018 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf79g\" (UniqueName: \"kubernetes.io/projected/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-kube-api-access-cf79g\") pod \"redhat-operators-7td4t\" (UID: \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\") " pod="openshift-marketplace/redhat-operators-7td4t" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.700415 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-catalog-content\") pod \"redhat-operators-7td4t\" (UID: \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\") " pod="openshift-marketplace/redhat-operators-7td4t" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.734911 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.734973 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.767044 4930 patch_prober.go:28] interesting pod/console-f9d7485db-qfqmk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.767120 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qfqmk" podUID="507084c7-1280-4943-bff6-497f1dc21c0a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.779394 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf79g\" (UniqueName: \"kubernetes.io/projected/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-kube-api-access-cf79g\") pod \"redhat-operators-7td4t\" (UID: \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\") " pod="openshift-marketplace/redhat-operators-7td4t" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.788395 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7td4t" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.874442 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zwmmc"] Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.875747 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zwmmc" Nov 24 12:01:43 crc kubenswrapper[4930]: I1124 12:01:43.891392 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwmmc"] Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.018004 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-catalog-content\") pod \"redhat-operators-zwmmc\" (UID: \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\") " pod="openshift-marketplace/redhat-operators-zwmmc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.018045 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pfsw\" (UniqueName: \"kubernetes.io/projected/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-kube-api-access-4pfsw\") pod \"redhat-operators-zwmmc\" (UID: \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\") " pod="openshift-marketplace/redhat-operators-zwmmc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.018143 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-utilities\") pod \"redhat-operators-zwmmc\" (UID: \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\") " pod="openshift-marketplace/redhat-operators-zwmmc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.122621 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-catalog-content\") pod \"redhat-operators-zwmmc\" (UID: \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\") " pod="openshift-marketplace/redhat-operators-zwmmc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.121532 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-catalog-content\") pod \"redhat-operators-zwmmc\" (UID: \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\") " pod="openshift-marketplace/redhat-operators-zwmmc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.122725 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pfsw\" (UniqueName: \"kubernetes.io/projected/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-kube-api-access-4pfsw\") pod \"redhat-operators-zwmmc\" (UID: \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\") " pod="openshift-marketplace/redhat-operators-zwmmc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.124015 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-utilities\") pod \"redhat-operators-zwmmc\" (UID: \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\") " pod="openshift-marketplace/redhat-operators-zwmmc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.125175 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-utilities\") pod \"redhat-operators-zwmmc\" (UID: \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\") " pod="openshift-marketplace/redhat-operators-zwmmc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.140581 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.153769 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pfsw\" (UniqueName: \"kubernetes.io/projected/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-kube-api-access-4pfsw\") pod \"redhat-operators-zwmmc\" (UID: \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\") 
" pod="openshift-marketplace/redhat-operators-zwmmc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.247721 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fpv5v"] Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.261711 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwmmc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.315279 4930 patch_prober.go:28] interesting pod/downloads-7954f5f757-l4vrl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.315345 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l4vrl" podUID="84eba226-cf40-4011-a4a0-0cb9e774da5e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.315626 4930 patch_prober.go:28] interesting pod/downloads-7954f5f757-l4vrl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.315695 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l4vrl" podUID="84eba226-cf40-4011-a4a0-0cb9e774da5e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.456117 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.463608 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xhvvt" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.463690 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7td4t"] Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.568192 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jkm8r" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.580430 4930 patch_prober.go:28] interesting pod/router-default-5444994796-jkm8r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:01:44 crc kubenswrapper[4930]: [-]has-synced failed: reason withheld Nov 24 12:01:44 crc kubenswrapper[4930]: [+]process-running ok Nov 24 12:01:44 crc kubenswrapper[4930]: healthz check failed Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.580490 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jkm8r" podUID="b85d7650-00f5-41a0-b862-b884dd7190cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.663924 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.664944 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.676036 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.680085 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.680227 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.780958 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7td4t" event={"ID":"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43","Type":"ContainerStarted","Data":"655f37fc6f85422579f26bd5dc46e4e719721cef962df15110f520d270dbc29a"} Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.811287 4930 generic.go:334] "Generic (PLEG): container finished" podID="e23c1567-d78e-4ffe-b601-6e4c70486428" containerID="f4efd9c6dd8c4085d595a954cd74c0dac976eaa7b1280576db9c59a50f85b130" exitCode=0 Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.811957 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwf54" event={"ID":"e23c1567-d78e-4ffe-b601-6e4c70486428","Type":"ContainerDied","Data":"f4efd9c6dd8c4085d595a954cd74c0dac976eaa7b1280576db9c59a50f85b130"} Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.811998 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwf54" event={"ID":"e23c1567-d78e-4ffe-b601-6e4c70486428","Type":"ContainerStarted","Data":"ab87a749c549f4739fe04b8989753d9f2968aff1edecbb1aa790995ef7ff7385"} Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.822100 4930 generic.go:334] "Generic (PLEG): container finished" 
podID="b3a171f6-a50f-4a41-bd81-cab660b6f347" containerID="d333e3f2b61d4cffd5484287a9e0239aaf747383a06e2608565527fe454e48c8" exitCode=0 Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.822171 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b3a171f6-a50f-4a41-bd81-cab660b6f347","Type":"ContainerDied","Data":"d333e3f2b61d4cffd5484287a9e0239aaf747383a06e2608565527fe454e48c8"} Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.840925 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" event={"ID":"d6022f6c-fa48-40b0-b2c2-e74b56071b38","Type":"ContainerStarted","Data":"1e5f8165c28a566b6f2470e789eab85fe1e1261e4ee3409de33b1ec58512bc94"} Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.840963 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" event={"ID":"d6022f6c-fa48-40b0-b2c2-e74b56071b38","Type":"ContainerStarted","Data":"bed49ad382eb252938a9134f63fc52f6eab46b9017725e30a2483322bac2210c"} Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.847162 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/394cdc2d-42ad-46e4-9afb-cb4158ecc3a3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.847207 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/394cdc2d-42ad-46e4-9afb-cb4158ecc3a3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 
12:01:44.885001 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" podStartSLOduration=130.88497989 podStartE2EDuration="2m10.88497989s" podCreationTimestamp="2025-11-24 11:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:44.883915199 +0000 UTC m=+151.498243159" watchObservedRunningTime="2025-11-24 12:01:44.88497989 +0000 UTC m=+151.499307840" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.949516 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/394cdc2d-42ad-46e4-9afb-cb4158ecc3a3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.950088 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/394cdc2d-42ad-46e4-9afb-cb4158ecc3a3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:01:44 crc kubenswrapper[4930]: I1124 12:01:44.951194 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/394cdc2d-42ad-46e4-9afb-cb4158ecc3a3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.014898 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/394cdc2d-42ad-46e4-9afb-cb4158ecc3a3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: 
\"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.033080 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.074419 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwmmc"] Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.250954 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.361656 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-secret-volume\") pod \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\" (UID: \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\") " Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.362390 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-config-volume\") pod \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\" (UID: \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\") " Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.362429 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7flgk\" (UniqueName: \"kubernetes.io/projected/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-kube-api-access-7flgk\") pod \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\" (UID: \"ea65b02d-9e8a-4089-b867-d1c7cfb70df5\") " Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.365352 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-config-volume" (OuterVolumeSpecName: 
"config-volume") pod "ea65b02d-9e8a-4089-b867-d1c7cfb70df5" (UID: "ea65b02d-9e8a-4089-b867-d1c7cfb70df5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.372346 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ea65b02d-9e8a-4089-b867-d1c7cfb70df5" (UID: "ea65b02d-9e8a-4089-b867-d1c7cfb70df5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.374916 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-kube-api-access-7flgk" (OuterVolumeSpecName: "kube-api-access-7flgk") pod "ea65b02d-9e8a-4089-b867-d1c7cfb70df5" (UID: "ea65b02d-9e8a-4089-b867-d1c7cfb70df5"). InnerVolumeSpecName "kube-api-access-7flgk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.464972 4930 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.465295 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7flgk\" (UniqueName: \"kubernetes.io/projected/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-kube-api-access-7flgk\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.465307 4930 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea65b02d-9e8a-4089-b867-d1c7cfb70df5-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.559359 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.572734 4930 patch_prober.go:28] interesting pod/router-default-5444994796-jkm8r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:01:45 crc kubenswrapper[4930]: [-]has-synced failed: reason withheld Nov 24 12:01:45 crc kubenswrapper[4930]: [+]process-running ok Nov 24 12:01:45 crc kubenswrapper[4930]: healthz check failed Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.572791 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jkm8r" podUID="b85d7650-00f5-41a0-b862-b884dd7190cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:01:45 crc kubenswrapper[4930]: W1124 12:01:45.640701 4930 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod394cdc2d_42ad_46e4_9afb_cb4158ecc3a3.slice/crio-a9e78573a982920fd38e620b10027a7eba8f2cc8b3105e9ee88ddd5db7d428fa WatchSource:0}: Error finding container a9e78573a982920fd38e620b10027a7eba8f2cc8b3105e9ee88ddd5db7d428fa: Status 404 returned error can't find the container with id a9e78573a982920fd38e620b10027a7eba8f2cc8b3105e9ee88ddd5db7d428fa Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.868172 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3","Type":"ContainerStarted","Data":"a9e78573a982920fd38e620b10027a7eba8f2cc8b3105e9ee88ddd5db7d428fa"} Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.874231 4930 generic.go:334] "Generic (PLEG): container finished" podID="d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" containerID="b1ef8854c2b745b58b09f1bbc26a77aec4962d15a8d4df70ba0b88b59e76d186" exitCode=0 Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.874313 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7td4t" event={"ID":"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43","Type":"ContainerDied","Data":"b1ef8854c2b745b58b09f1bbc26a77aec4962d15a8d4df70ba0b88b59e76d186"} Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.896215 4930 generic.go:334] "Generic (PLEG): container finished" podID="c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" containerID="740c7d02145b29a1d60ae3b1b6acd45bb35f25713ce855eb362103f053062454" exitCode=0 Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.899715 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwmmc" event={"ID":"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739","Type":"ContainerDied","Data":"740c7d02145b29a1d60ae3b1b6acd45bb35f25713ce855eb362103f053062454"} Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.899911 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-zwmmc" event={"ID":"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739","Type":"ContainerStarted","Data":"2c7932c93efe9dce58d4915acbf06b4c6343e5a7e15aadb6041dae7881a67bf0"} Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.905153 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" event={"ID":"ea65b02d-9e8a-4089-b867-d1c7cfb70df5","Type":"ContainerDied","Data":"96513d6402c96f0f5b4b9c3773f15e57a7079e09823e1e37516b9df1e0ec8dd1"} Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.905193 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96513d6402c96f0f5b4b9c3773f15e57a7079e09823e1e37516b9df1e0ec8dd1" Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.905497 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44" Nov 24 12:01:45 crc kubenswrapper[4930]: I1124 12:01:45.905589 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:01:46 crc kubenswrapper[4930]: I1124 12:01:46.228255 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:01:46 crc kubenswrapper[4930]: I1124 12:01:46.287304 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3a171f6-a50f-4a41-bd81-cab660b6f347-kubelet-dir\") pod \"b3a171f6-a50f-4a41-bd81-cab660b6f347\" (UID: \"b3a171f6-a50f-4a41-bd81-cab660b6f347\") " Nov 24 12:01:46 crc kubenswrapper[4930]: I1124 12:01:46.287421 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3a171f6-a50f-4a41-bd81-cab660b6f347-kube-api-access\") pod \"b3a171f6-a50f-4a41-bd81-cab660b6f347\" (UID: \"b3a171f6-a50f-4a41-bd81-cab660b6f347\") " Nov 24 12:01:46 crc kubenswrapper[4930]: I1124 12:01:46.289031 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3a171f6-a50f-4a41-bd81-cab660b6f347-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b3a171f6-a50f-4a41-bd81-cab660b6f347" (UID: "b3a171f6-a50f-4a41-bd81-cab660b6f347"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:01:46 crc kubenswrapper[4930]: I1124 12:01:46.293725 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3a171f6-a50f-4a41-bd81-cab660b6f347-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b3a171f6-a50f-4a41-bd81-cab660b6f347" (UID: "b3a171f6-a50f-4a41-bd81-cab660b6f347"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:01:46 crc kubenswrapper[4930]: I1124 12:01:46.389073 4930 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3a171f6-a50f-4a41-bd81-cab660b6f347-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:46 crc kubenswrapper[4930]: I1124 12:01:46.389105 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3a171f6-a50f-4a41-bd81-cab660b6f347-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:46 crc kubenswrapper[4930]: I1124 12:01:46.576595 4930 patch_prober.go:28] interesting pod/router-default-5444994796-jkm8r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:01:46 crc kubenswrapper[4930]: [-]has-synced failed: reason withheld Nov 24 12:01:46 crc kubenswrapper[4930]: [+]process-running ok Nov 24 12:01:46 crc kubenswrapper[4930]: healthz check failed Nov 24 12:01:46 crc kubenswrapper[4930]: I1124 12:01:46.576637 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jkm8r" podUID="b85d7650-00f5-41a0-b862-b884dd7190cc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:01:46 crc kubenswrapper[4930]: I1124 12:01:46.946229 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b3a171f6-a50f-4a41-bd81-cab660b6f347","Type":"ContainerDied","Data":"f1c2a6f360e82428bf7a7df13a01bb94dda9815fe5c135d12f8364701fb9ce54"} Nov 24 12:01:46 crc kubenswrapper[4930]: I1124 12:01:46.946742 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1c2a6f360e82428bf7a7df13a01bb94dda9815fe5c135d12f8364701fb9ce54" Nov 24 12:01:46 crc 
kubenswrapper[4930]: I1124 12:01:46.946251 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:01:46 crc kubenswrapper[4930]: I1124 12:01:46.953344 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3","Type":"ContainerStarted","Data":"1979ddcad91faf8c8b34760d102e13ebc2478400358069846e6339e61d888c06"} Nov 24 12:01:47 crc kubenswrapper[4930]: I1124 12:01:47.579170 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jkm8r" Nov 24 12:01:47 crc kubenswrapper[4930]: I1124 12:01:47.584144 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jkm8r" Nov 24 12:01:47 crc kubenswrapper[4930]: I1124 12:01:47.623403 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.623382199 podStartE2EDuration="3.623382199s" podCreationTimestamp="2025-11-24 12:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:46.974375606 +0000 UTC m=+153.588703556" watchObservedRunningTime="2025-11-24 12:01:47.623382199 +0000 UTC m=+154.237710149" Nov 24 12:01:47 crc kubenswrapper[4930]: I1124 12:01:47.982912 4930 generic.go:334] "Generic (PLEG): container finished" podID="394cdc2d-42ad-46e4-9afb-cb4158ecc3a3" containerID="1979ddcad91faf8c8b34760d102e13ebc2478400358069846e6339e61d888c06" exitCode=0 Nov 24 12:01:47 crc kubenswrapper[4930]: I1124 12:01:47.983004 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3","Type":"ContainerDied","Data":"1979ddcad91faf8c8b34760d102e13ebc2478400358069846e6339e61d888c06"} Nov 24 12:01:49 crc kubenswrapper[4930]: I1124 12:01:49.467268 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:01:49 crc kubenswrapper[4930]: I1124 12:01:49.583051 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/394cdc2d-42ad-46e4-9afb-cb4158ecc3a3-kube-api-access\") pod \"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3\" (UID: \"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3\") " Nov 24 12:01:49 crc kubenswrapper[4930]: I1124 12:01:49.583131 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/394cdc2d-42ad-46e4-9afb-cb4158ecc3a3-kubelet-dir\") pod \"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3\" (UID: \"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3\") " Nov 24 12:01:49 crc kubenswrapper[4930]: I1124 12:01:49.583385 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/394cdc2d-42ad-46e4-9afb-cb4158ecc3a3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "394cdc2d-42ad-46e4-9afb-cb4158ecc3a3" (UID: "394cdc2d-42ad-46e4-9afb-cb4158ecc3a3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:01:49 crc kubenswrapper[4930]: I1124 12:01:49.590078 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/394cdc2d-42ad-46e4-9afb-cb4158ecc3a3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "394cdc2d-42ad-46e4-9afb-cb4158ecc3a3" (UID: "394cdc2d-42ad-46e4-9afb-cb4158ecc3a3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:01:49 crc kubenswrapper[4930]: I1124 12:01:49.620017 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-4xdgm" Nov 24 12:01:49 crc kubenswrapper[4930]: I1124 12:01:49.684740 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/394cdc2d-42ad-46e4-9afb-cb4158ecc3a3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:49 crc kubenswrapper[4930]: I1124 12:01:49.684775 4930 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/394cdc2d-42ad-46e4-9afb-cb4158ecc3a3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:50 crc kubenswrapper[4930]: I1124 12:01:50.006911 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"394cdc2d-42ad-46e4-9afb-cb4158ecc3a3","Type":"ContainerDied","Data":"a9e78573a982920fd38e620b10027a7eba8f2cc8b3105e9ee88ddd5db7d428fa"} Nov 24 12:01:50 crc kubenswrapper[4930]: I1124 12:01:50.006961 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e78573a982920fd38e620b10027a7eba8f2cc8b3105e9ee88ddd5db7d428fa" Nov 24 12:01:50 crc kubenswrapper[4930]: I1124 12:01:50.006977 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:01:51 crc kubenswrapper[4930]: I1124 12:01:51.030130 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-fvtkh_b654e8f6-b229-4515-92f7-68367ffa48a2/cluster-samples-operator/0.log" Nov 24 12:01:51 crc kubenswrapper[4930]: I1124 12:01:51.031162 4930 generic.go:334] "Generic (PLEG): container finished" podID="b654e8f6-b229-4515-92f7-68367ffa48a2" containerID="ff518cbd11dfa6287f0fb0fcd5a743779e2326c2f0a6bcf6b6aec320481d449a" exitCode=2 Nov 24 12:01:51 crc kubenswrapper[4930]: I1124 12:01:51.031196 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" event={"ID":"b654e8f6-b229-4515-92f7-68367ffa48a2","Type":"ContainerDied","Data":"ff518cbd11dfa6287f0fb0fcd5a743779e2326c2f0a6bcf6b6aec320481d449a"} Nov 24 12:01:51 crc kubenswrapper[4930]: I1124 12:01:51.033413 4930 scope.go:117] "RemoveContainer" containerID="ff518cbd11dfa6287f0fb0fcd5a743779e2326c2f0a6bcf6b6aec320481d449a" Nov 24 12:01:53 crc kubenswrapper[4930]: I1124 12:01:53.726507 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:53 crc kubenswrapper[4930]: I1124 12:01:53.731984 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:01:54 crc kubenswrapper[4930]: I1124 12:01:54.314655 4930 patch_prober.go:28] interesting pod/downloads-7954f5f757-l4vrl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 24 12:01:54 crc kubenswrapper[4930]: I1124 12:01:54.314647 4930 patch_prober.go:28] interesting pod/downloads-7954f5f757-l4vrl container/download-server 
namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 24 12:01:54 crc kubenswrapper[4930]: I1124 12:01:54.314707 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l4vrl" podUID="84eba226-cf40-4011-a4a0-0cb9e774da5e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 24 12:01:54 crc kubenswrapper[4930]: I1124 12:01:54.314708 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l4vrl" podUID="84eba226-cf40-4011-a4a0-0cb9e774da5e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 24 12:01:56 crc kubenswrapper[4930]: I1124 12:01:56.095757 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:56 crc kubenswrapper[4930]: I1124 12:01:56.114458 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96ced043-6cad-4f17-8648-624f36bf14f1-metrics-certs\") pod \"network-metrics-daemon-r4jtv\" (UID: \"96ced043-6cad-4f17-8648-624f36bf14f1\") " pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:56 crc kubenswrapper[4930]: I1124 12:01:56.322691 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4jtv" Nov 24 12:01:59 crc kubenswrapper[4930]: I1124 12:01:59.955051 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-r4jtv"] Nov 24 12:01:59 crc kubenswrapper[4930]: W1124 12:01:59.964005 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96ced043_6cad_4f17_8648_624f36bf14f1.slice/crio-dbfe347ab15e681bc5c7f45cfcf333a9da5fc3802ae970b84d9958cf9871c7a1 WatchSource:0}: Error finding container dbfe347ab15e681bc5c7f45cfcf333a9da5fc3802ae970b84d9958cf9871c7a1: Status 404 returned error can't find the container with id dbfe347ab15e681bc5c7f45cfcf333a9da5fc3802ae970b84d9958cf9871c7a1 Nov 24 12:02:00 crc kubenswrapper[4930]: I1124 12:02:00.095804 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" event={"ID":"96ced043-6cad-4f17-8648-624f36bf14f1","Type":"ContainerStarted","Data":"dbfe347ab15e681bc5c7f45cfcf333a9da5fc3802ae970b84d9958cf9871c7a1"} Nov 24 12:02:01 crc kubenswrapper[4930]: I1124 12:02:01.104357 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" event={"ID":"96ced043-6cad-4f17-8648-624f36bf14f1","Type":"ContainerStarted","Data":"b97cb10e6213398970d53e142a6e6fd64bf9556233d3992c9feac99541cd35d0"} Nov 24 12:02:01 crc kubenswrapper[4930]: I1124 12:02:01.106963 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-fvtkh_b654e8f6-b229-4515-92f7-68367ffa48a2/cluster-samples-operator/0.log" Nov 24 12:02:01 crc kubenswrapper[4930]: I1124 12:02:01.107027 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" 
event={"ID":"b654e8f6-b229-4515-92f7-68367ffa48a2","Type":"ContainerStarted","Data":"b303fb33528838bd7ef76fa965d38ebc6fc6828c9d82577337d4468f24e4e8be"} Nov 24 12:02:01 crc kubenswrapper[4930]: I1124 12:02:01.809795 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:02:01 crc kubenswrapper[4930]: I1124 12:02:01.809853 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:02:02 crc kubenswrapper[4930]: I1124 12:02:02.115417 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-fvtkh_b654e8f6-b229-4515-92f7-68367ffa48a2/cluster-samples-operator/1.log" Nov 24 12:02:02 crc kubenswrapper[4930]: I1124 12:02:02.116401 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-fvtkh_b654e8f6-b229-4515-92f7-68367ffa48a2/cluster-samples-operator/0.log" Nov 24 12:02:02 crc kubenswrapper[4930]: I1124 12:02:02.116455 4930 generic.go:334] "Generic (PLEG): container finished" podID="b654e8f6-b229-4515-92f7-68367ffa48a2" containerID="b303fb33528838bd7ef76fa965d38ebc6fc6828c9d82577337d4468f24e4e8be" exitCode=2 Nov 24 12:02:02 crc kubenswrapper[4930]: I1124 12:02:02.116494 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" 
event={"ID":"b654e8f6-b229-4515-92f7-68367ffa48a2","Type":"ContainerDied","Data":"b303fb33528838bd7ef76fa965d38ebc6fc6828c9d82577337d4468f24e4e8be"} Nov 24 12:02:02 crc kubenswrapper[4930]: I1124 12:02:02.116570 4930 scope.go:117] "RemoveContainer" containerID="ff518cbd11dfa6287f0fb0fcd5a743779e2326c2f0a6bcf6b6aec320481d449a" Nov 24 12:02:02 crc kubenswrapper[4930]: I1124 12:02:02.117264 4930 scope.go:117] "RemoveContainer" containerID="b303fb33528838bd7ef76fa965d38ebc6fc6828c9d82577337d4468f24e4e8be" Nov 24 12:02:02 crc kubenswrapper[4930]: E1124 12:02:02.117567 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-samples-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-samples-operator pod=cluster-samples-operator-665b6dd947-fvtkh_openshift-cluster-samples-operator(b654e8f6-b229-4515-92f7-68367ffa48a2)\"" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" podUID="b654e8f6-b229-4515-92f7-68367ffa48a2" Nov 24 12:02:03 crc kubenswrapper[4930]: I1124 12:02:03.665749 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:02:04 crc kubenswrapper[4930]: I1124 12:02:04.330192 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-l4vrl" Nov 24 12:02:13 crc kubenswrapper[4930]: I1124 12:02:13.596754 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-58bfl" Nov 24 12:02:15 crc kubenswrapper[4930]: E1124 12:02:15.568673 4930 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 12:02:15 crc kubenswrapper[4930]: E1124 
12:02:15.569627 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvb7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ss26h_openshift-marketplace(ecbd0a96-64bb-4de8-8d4d-8861e24fd414): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 12:02:15 crc kubenswrapper[4930]: E1124 12:02:15.571629 4930 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ss26h" podUID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" Nov 24 12:02:16 crc kubenswrapper[4930]: I1124 12:02:16.084738 4930 scope.go:117] "RemoveContainer" containerID="b303fb33528838bd7ef76fa965d38ebc6fc6828c9d82577337d4468f24e4e8be" Nov 24 12:02:16 crc kubenswrapper[4930]: E1124 12:02:16.352069 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ss26h" podUID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" Nov 24 12:02:16 crc kubenswrapper[4930]: E1124 12:02:16.427221 4930 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 12:02:16 crc kubenswrapper[4930]: E1124 12:02:16.427384 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtsdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vggrt_openshift-marketplace(000050ff-5ba3-4660-be21-00afb861c946): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 24 12:02:16 crc kubenswrapper[4930]: E1124 12:02:16.428578 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vggrt" podUID="000050ff-5ba3-4660-be21-00afb861c946"
Nov 24 12:02:16 crc kubenswrapper[4930]: E1124 12:02:16.462978 4930 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 24 12:02:16 crc kubenswrapper[4930]: E1124 12:02:16.463627 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4tp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nwf54_openshift-marketplace(e23c1567-d78e-4ffe-b601-6e4c70486428): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 24 12:02:16 crc kubenswrapper[4930]: E1124 12:02:16.464782 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nwf54" podUID="e23c1567-d78e-4ffe-b601-6e4c70486428"
Nov 24 12:02:16 crc kubenswrapper[4930]: E1124 12:02:16.487138 4930 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 24 12:02:16 crc kubenswrapper[4930]: E1124 12:02:16.487276 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mtnbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k4kdl_openshift-marketplace(8b72afae-5c1d-429f-98b7-27368332e3b1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 24 12:02:16 crc kubenswrapper[4930]: E1124 12:02:16.488455 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-k4kdl" podUID="8b72afae-5c1d-429f-98b7-27368332e3b1"
Nov 24 12:02:17 crc kubenswrapper[4930]: I1124 12:02:17.195517 4930 generic.go:334] "Generic (PLEG): container finished" podID="bc7bba02-37bc-4786-bd0a-3b5710779d25" containerID="76e42f2fe966bdabe8e3eb1b1108708198eb2ef3bbd28e79d92d8afe867b57d9" exitCode=0
Nov 24 12:02:17 crc kubenswrapper[4930]: I1124 12:02:17.195593 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tv4h" event={"ID":"bc7bba02-37bc-4786-bd0a-3b5710779d25","Type":"ContainerDied","Data":"76e42f2fe966bdabe8e3eb1b1108708198eb2ef3bbd28e79d92d8afe867b57d9"}
Nov 24 12:02:17 crc kubenswrapper[4930]: I1124 12:02:17.202845 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7td4t" event={"ID":"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43","Type":"ContainerStarted","Data":"29471652439bbb29c48825eafb23c0d462939bd3fef97e218bde9e4435bd8b6c"}
Nov 24 12:02:17 crc kubenswrapper[4930]: I1124 12:02:17.207681 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-fvtkh_b654e8f6-b229-4515-92f7-68367ffa48a2/cluster-samples-operator/1.log"
Nov 24 12:02:17 crc kubenswrapper[4930]: I1124 12:02:17.208194 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvtkh" event={"ID":"b654e8f6-b229-4515-92f7-68367ffa48a2","Type":"ContainerStarted","Data":"fdd125a45c96e81309acce22a10b376bd8687eb6ffe1d8f2e5efb15e7a6e4626"}
Nov 24 12:02:17 crc kubenswrapper[4930]: I1124 12:02:17.211512 4930 generic.go:334] "Generic (PLEG): container finished" podID="fe296064-195b-42d0-a0a1-8012587b8e04" containerID="b179a2ac023151c87640b188c871bb759a315568d8266f54349f5cacb9be0890" exitCode=0
Nov 24 12:02:17 crc kubenswrapper[4930]: I1124 12:02:17.211635 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hsrz" event={"ID":"fe296064-195b-42d0-a0a1-8012587b8e04","Type":"ContainerDied","Data":"b179a2ac023151c87640b188c871bb759a315568d8266f54349f5cacb9be0890"}
Nov 24 12:02:17 crc kubenswrapper[4930]: I1124 12:02:17.215276 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwmmc" event={"ID":"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739","Type":"ContainerStarted","Data":"d5a042127bf8da4c02bc9ffd40cd74c6dafe18ede5c6716d981f3c9ccee67e27"}
Nov 24 12:02:17 crc kubenswrapper[4930]: I1124 12:02:17.222328 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r4jtv" event={"ID":"96ced043-6cad-4f17-8648-624f36bf14f1","Type":"ContainerStarted","Data":"2b2b3f9f90b33291f9c955e28cf3f55087b6b4faa21a6663deadd0df21f7c428"}
Nov 24 12:02:17 crc kubenswrapper[4930]: E1124 12:02:17.222478 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vggrt" podUID="000050ff-5ba3-4660-be21-00afb861c946"
Nov 24 12:02:17 crc kubenswrapper[4930]: E1124 12:02:17.223005 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nwf54" podUID="e23c1567-d78e-4ffe-b601-6e4c70486428"
Nov 24 12:02:17 crc kubenswrapper[4930]: E1124 12:02:17.226006 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-k4kdl" podUID="8b72afae-5c1d-429f-98b7-27368332e3b1"
Nov 24 12:02:17 crc kubenswrapper[4930]: I1124 12:02:17.302463 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-r4jtv" podStartSLOduration=164.302441741 podStartE2EDuration="2m44.302441741s" podCreationTimestamp="2025-11-24 11:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:02:17.300840654 +0000 UTC m=+183.915168614" watchObservedRunningTime="2025-11-24 12:02:17.302441741 +0000 UTC m=+183.916769691"
Nov 24 12:02:18 crc kubenswrapper[4930]: I1124 12:02:18.229995 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tv4h" event={"ID":"bc7bba02-37bc-4786-bd0a-3b5710779d25","Type":"ContainerStarted","Data":"75ef6df97721fabe29828e07c3051ca72e380a0b2df4beb88797c9113fe51067"}
Nov 24 12:02:18 crc kubenswrapper[4930]: I1124 12:02:18.232407 4930 generic.go:334] "Generic (PLEG): container finished" podID="d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" containerID="29471652439bbb29c48825eafb23c0d462939bd3fef97e218bde9e4435bd8b6c" exitCode=0
Nov 24 12:02:18 crc kubenswrapper[4930]: I1124 12:02:18.232502 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7td4t" event={"ID":"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43","Type":"ContainerDied","Data":"29471652439bbb29c48825eafb23c0d462939bd3fef97e218bde9e4435bd8b6c"}
Nov 24 12:02:18 crc kubenswrapper[4930]: I1124 12:02:18.235628 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hsrz" event={"ID":"fe296064-195b-42d0-a0a1-8012587b8e04","Type":"ContainerStarted","Data":"7cef1f96a40d6ee6c9f0c337368a7d4b0b470adbb229204dba22f9c061cb27c7"}
Nov 24 12:02:18 crc kubenswrapper[4930]: I1124 12:02:18.240647 4930 generic.go:334] "Generic (PLEG): container finished" podID="c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" containerID="d5a042127bf8da4c02bc9ffd40cd74c6dafe18ede5c6716d981f3c9ccee67e27" exitCode=0
Nov 24 12:02:18 crc kubenswrapper[4930]: I1124 12:02:18.240756 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwmmc" event={"ID":"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739","Type":"ContainerDied","Data":"d5a042127bf8da4c02bc9ffd40cd74c6dafe18ede5c6716d981f3c9ccee67e27"}
Nov 24 12:02:18 crc kubenswrapper[4930]: I1124 12:02:18.240803 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwmmc" event={"ID":"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739","Type":"ContainerStarted","Data":"2082684d069ac8a0c78bbf0a00fc0fffbb4e90d0a07209f7967b9005b91e9c80"}
Nov 24 12:02:18 crc kubenswrapper[4930]: I1124 12:02:18.265163 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4tv4h" podStartSLOduration=4.185345373 podStartE2EDuration="38.265137145s" podCreationTimestamp="2025-11-24 12:01:40 +0000 UTC" firstStartedPulling="2025-11-24 12:01:43.651988956 +0000 UTC m=+150.266316906" lastFinishedPulling="2025-11-24 12:02:17.731780728 +0000 UTC m=+184.346108678" observedRunningTime="2025-11-24 12:02:18.262524319 +0000 UTC m=+184.876852269" watchObservedRunningTime="2025-11-24 12:02:18.265137145 +0000 UTC m=+184.879465115"
Nov 24 12:02:18 crc kubenswrapper[4930]: I1124 12:02:18.317244 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6hsrz" podStartSLOduration=4.468166346 podStartE2EDuration="38.317226252s" podCreationTimestamp="2025-11-24 12:01:40 +0000 UTC" firstStartedPulling="2025-11-24 12:01:43.732930982 +0000 UTC m=+150.347258932" lastFinishedPulling="2025-11-24 12:02:17.581990888 +0000 UTC m=+184.196318838" observedRunningTime="2025-11-24 12:02:18.316865361 +0000 UTC m=+184.931193331" watchObservedRunningTime="2025-11-24 12:02:18.317226252 +0000 UTC m=+184.931554202"
Nov 24 12:02:18 crc kubenswrapper[4930]: I1124 12:02:18.339036 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zwmmc" podStartSLOduration=3.535940212 podStartE2EDuration="35.339016606s" podCreationTimestamp="2025-11-24 12:01:43 +0000 UTC" firstStartedPulling="2025-11-24 12:01:45.903351796 +0000 UTC m=+152.517679746" lastFinishedPulling="2025-11-24 12:02:17.70642816 +0000 UTC m=+184.320756140" observedRunningTime="2025-11-24 12:02:18.336797261 +0000 UTC m=+184.951125221" watchObservedRunningTime="2025-11-24 12:02:18.339016606 +0000 UTC m=+184.953344566"
Nov 24 12:02:19 crc kubenswrapper[4930]: I1124 12:02:19.250732 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7td4t" event={"ID":"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43","Type":"ContainerStarted","Data":"4303867d1df45887dd04ef0113c40b7ec05c4952cc50b40cd33f398b1866669a"}
Nov 24 12:02:19 crc kubenswrapper[4930]: I1124 12:02:19.275380 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7td4t" podStartSLOduration=3.516030453 podStartE2EDuration="36.275360635s" podCreationTimestamp="2025-11-24 12:01:43 +0000 UTC" firstStartedPulling="2025-11-24 12:01:45.893767457 +0000 UTC m=+152.508095407" lastFinishedPulling="2025-11-24 12:02:18.653097639 +0000 UTC m=+185.267425589" observedRunningTime="2025-11-24 12:02:19.27143706 +0000 UTC m=+185.885765010" watchObservedRunningTime="2025-11-24 12:02:19.275360635 +0000 UTC m=+185.889688585"
Nov 24 12:02:21 crc kubenswrapper[4930]: I1124 12:02:21.307936 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 12:02:21 crc kubenswrapper[4930]: I1124 12:02:21.583835 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4tv4h"
Nov 24 12:02:21 crc kubenswrapper[4930]: I1124 12:02:21.583899 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4tv4h"
Nov 24 12:02:21 crc kubenswrapper[4930]: I1124 12:02:21.691772 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6hsrz"
Nov 24 12:02:21 crc kubenswrapper[4930]: I1124 12:02:21.691853 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6hsrz"
Nov 24 12:02:21 crc kubenswrapper[4930]: I1124 12:02:21.793074 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6hsrz"
Nov 24 12:02:21 crc kubenswrapper[4930]: I1124 12:02:21.798096 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4tv4h"
Nov 24 12:02:22 crc kubenswrapper[4930]: I1124 12:02:22.212034 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9d78l"]
Nov 24 12:02:22 crc kubenswrapper[4930]: I1124 12:02:22.347688 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4tv4h"
Nov 24 12:02:22 crc kubenswrapper[4930]: I1124 12:02:22.416603 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6hsrz"
Nov 24 12:02:23 crc kubenswrapper[4930]: I1124 12:02:23.789183 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7td4t"
Nov 24 12:02:23 crc kubenswrapper[4930]: I1124 12:02:23.789653 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7td4t"
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.261932 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zwmmc"
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.262003 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zwmmc"
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.439451 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hsrz"]
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.442883 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6hsrz" podUID="fe296064-195b-42d0-a0a1-8012587b8e04" containerName="registry-server" containerID="cri-o://7cef1f96a40d6ee6c9f0c337368a7d4b0b470adbb229204dba22f9c061cb27c7" gracePeriod=2
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.831304 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7td4t" podUID="d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" containerName="registry-server" probeResult="failure" output=<
Nov 24 12:02:24 crc kubenswrapper[4930]: timeout: failed to connect service ":50051" within 1s
Nov 24 12:02:24 crc kubenswrapper[4930]: >
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.834285 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6hsrz"
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.935284 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjhkm\" (UniqueName: \"kubernetes.io/projected/fe296064-195b-42d0-a0a1-8012587b8e04-kube-api-access-hjhkm\") pod \"fe296064-195b-42d0-a0a1-8012587b8e04\" (UID: \"fe296064-195b-42d0-a0a1-8012587b8e04\") "
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.935392 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe296064-195b-42d0-a0a1-8012587b8e04-utilities\") pod \"fe296064-195b-42d0-a0a1-8012587b8e04\" (UID: \"fe296064-195b-42d0-a0a1-8012587b8e04\") "
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.935471 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe296064-195b-42d0-a0a1-8012587b8e04-catalog-content\") pod \"fe296064-195b-42d0-a0a1-8012587b8e04\" (UID: \"fe296064-195b-42d0-a0a1-8012587b8e04\") "
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.936554 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe296064-195b-42d0-a0a1-8012587b8e04-utilities" (OuterVolumeSpecName: "utilities") pod "fe296064-195b-42d0-a0a1-8012587b8e04" (UID: "fe296064-195b-42d0-a0a1-8012587b8e04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.941967 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe296064-195b-42d0-a0a1-8012587b8e04-kube-api-access-hjhkm" (OuterVolumeSpecName: "kube-api-access-hjhkm") pod "fe296064-195b-42d0-a0a1-8012587b8e04" (UID: "fe296064-195b-42d0-a0a1-8012587b8e04"). InnerVolumeSpecName "kube-api-access-hjhkm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.945363 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjhkm\" (UniqueName: \"kubernetes.io/projected/fe296064-195b-42d0-a0a1-8012587b8e04-kube-api-access-hjhkm\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.945413 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe296064-195b-42d0-a0a1-8012587b8e04-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:24 crc kubenswrapper[4930]: I1124 12:02:24.988773 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe296064-195b-42d0-a0a1-8012587b8e04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe296064-195b-42d0-a0a1-8012587b8e04" (UID: "fe296064-195b-42d0-a0a1-8012587b8e04"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.047133 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe296064-195b-42d0-a0a1-8012587b8e04-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.282354 4930 generic.go:334] "Generic (PLEG): container finished" podID="fe296064-195b-42d0-a0a1-8012587b8e04" containerID="7cef1f96a40d6ee6c9f0c337368a7d4b0b470adbb229204dba22f9c061cb27c7" exitCode=0
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.282423 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hsrz" event={"ID":"fe296064-195b-42d0-a0a1-8012587b8e04","Type":"ContainerDied","Data":"7cef1f96a40d6ee6c9f0c337368a7d4b0b470adbb229204dba22f9c061cb27c7"}
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.282448 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6hsrz"
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.282478 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hsrz" event={"ID":"fe296064-195b-42d0-a0a1-8012587b8e04","Type":"ContainerDied","Data":"0b93d7e179bea2b9bee0285c0371d8c1da8245598bdd5411dd98bf866788f2ed"}
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.282509 4930 scope.go:117] "RemoveContainer" containerID="7cef1f96a40d6ee6c9f0c337368a7d4b0b470adbb229204dba22f9c061cb27c7"
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.301588 4930 scope.go:117] "RemoveContainer" containerID="b179a2ac023151c87640b188c871bb759a315568d8266f54349f5cacb9be0890"
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.302093 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zwmmc" podUID="c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" containerName="registry-server" probeResult="failure" output=<
Nov 24 12:02:25 crc kubenswrapper[4930]: timeout: failed to connect service ":50051" within 1s
Nov 24 12:02:25 crc kubenswrapper[4930]: >
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.324603 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hsrz"]
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.334004 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6hsrz"]
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.340810 4930 scope.go:117] "RemoveContainer" containerID="a25824c0f75a6ef4878ae2c2a02cebbdfcd91d71de51f4aff84589a710d8d114"
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.357677 4930 scope.go:117] "RemoveContainer" containerID="7cef1f96a40d6ee6c9f0c337368a7d4b0b470adbb229204dba22f9c061cb27c7"
Nov 24 12:02:25 crc kubenswrapper[4930]: E1124 12:02:25.358314 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cef1f96a40d6ee6c9f0c337368a7d4b0b470adbb229204dba22f9c061cb27c7\": container with ID starting with 7cef1f96a40d6ee6c9f0c337368a7d4b0b470adbb229204dba22f9c061cb27c7 not found: ID does not exist" containerID="7cef1f96a40d6ee6c9f0c337368a7d4b0b470adbb229204dba22f9c061cb27c7"
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.358356 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cef1f96a40d6ee6c9f0c337368a7d4b0b470adbb229204dba22f9c061cb27c7"} err="failed to get container status \"7cef1f96a40d6ee6c9f0c337368a7d4b0b470adbb229204dba22f9c061cb27c7\": rpc error: code = NotFound desc = could not find container \"7cef1f96a40d6ee6c9f0c337368a7d4b0b470adbb229204dba22f9c061cb27c7\": container with ID starting with 7cef1f96a40d6ee6c9f0c337368a7d4b0b470adbb229204dba22f9c061cb27c7 not found: ID does not exist"
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.358407 4930 scope.go:117] "RemoveContainer" containerID="b179a2ac023151c87640b188c871bb759a315568d8266f54349f5cacb9be0890"
Nov 24 12:02:25 crc kubenswrapper[4930]: E1124 12:02:25.358714 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b179a2ac023151c87640b188c871bb759a315568d8266f54349f5cacb9be0890\": container with ID starting with b179a2ac023151c87640b188c871bb759a315568d8266f54349f5cacb9be0890 not found: ID does not exist" containerID="b179a2ac023151c87640b188c871bb759a315568d8266f54349f5cacb9be0890"
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.358767 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b179a2ac023151c87640b188c871bb759a315568d8266f54349f5cacb9be0890"} err="failed to get container status \"b179a2ac023151c87640b188c871bb759a315568d8266f54349f5cacb9be0890\": rpc error: code = NotFound desc = could not find container \"b179a2ac023151c87640b188c871bb759a315568d8266f54349f5cacb9be0890\": container with ID starting with b179a2ac023151c87640b188c871bb759a315568d8266f54349f5cacb9be0890 not found: ID does not exist"
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.358801 4930 scope.go:117] "RemoveContainer" containerID="a25824c0f75a6ef4878ae2c2a02cebbdfcd91d71de51f4aff84589a710d8d114"
Nov 24 12:02:25 crc kubenswrapper[4930]: E1124 12:02:25.359171 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a25824c0f75a6ef4878ae2c2a02cebbdfcd91d71de51f4aff84589a710d8d114\": container with ID starting with a25824c0f75a6ef4878ae2c2a02cebbdfcd91d71de51f4aff84589a710d8d114 not found: ID does not exist" containerID="a25824c0f75a6ef4878ae2c2a02cebbdfcd91d71de51f4aff84589a710d8d114"
Nov 24 12:02:25 crc kubenswrapper[4930]: I1124 12:02:25.359221 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a25824c0f75a6ef4878ae2c2a02cebbdfcd91d71de51f4aff84589a710d8d114"} err="failed to get container status \"a25824c0f75a6ef4878ae2c2a02cebbdfcd91d71de51f4aff84589a710d8d114\": rpc error: code = NotFound desc = could not find container \"a25824c0f75a6ef4878ae2c2a02cebbdfcd91d71de51f4aff84589a710d8d114\": container with ID starting with a25824c0f75a6ef4878ae2c2a02cebbdfcd91d71de51f4aff84589a710d8d114 not found: ID does not exist"
Nov 24 12:02:26 crc kubenswrapper[4930]: I1124 12:02:26.091596 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe296064-195b-42d0-a0a1-8012587b8e04" path="/var/lib/kubelet/pods/fe296064-195b-42d0-a0a1-8012587b8e04/volumes"
Nov 24 12:02:31 crc kubenswrapper[4930]: I1124 12:02:31.344431 4930 generic.go:334] "Generic (PLEG): container finished" podID="e23c1567-d78e-4ffe-b601-6e4c70486428" containerID="855e66e2d730fdb02acb93090196c55e61270e1692457655a4049cc2276769f3" exitCode=0
Nov 24 12:02:31 crc kubenswrapper[4930]: I1124 12:02:31.344502 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwf54" event={"ID":"e23c1567-d78e-4ffe-b601-6e4c70486428","Type":"ContainerDied","Data":"855e66e2d730fdb02acb93090196c55e61270e1692457655a4049cc2276769f3"}
Nov 24 12:02:31 crc kubenswrapper[4930]: I1124 12:02:31.346893 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss26h" event={"ID":"ecbd0a96-64bb-4de8-8d4d-8861e24fd414","Type":"ContainerStarted","Data":"498070d1dc690930c1d382652536185aa6c27087fbee142e627faf6049d74a0a"}
Nov 24 12:02:31 crc kubenswrapper[4930]: I1124 12:02:31.809477 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 12:02:31 crc kubenswrapper[4930]: I1124 12:02:31.809567 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:02:32 crc kubenswrapper[4930]: I1124 12:02:32.356953 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwf54" event={"ID":"e23c1567-d78e-4ffe-b601-6e4c70486428","Type":"ContainerStarted","Data":"a723c1c3fbc56316b3f76c9b5a3c6d0448cfe3e9f394d3370cb53056a09766de"}
Nov 24 12:02:32 crc kubenswrapper[4930]: I1124 12:02:32.359594 4930 generic.go:334] "Generic (PLEG): container finished" podID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" containerID="498070d1dc690930c1d382652536185aa6c27087fbee142e627faf6049d74a0a" exitCode=0
Nov 24 12:02:32 crc kubenswrapper[4930]: I1124 12:02:32.359634 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss26h" event={"ID":"ecbd0a96-64bb-4de8-8d4d-8861e24fd414","Type":"ContainerDied","Data":"498070d1dc690930c1d382652536185aa6c27087fbee142e627faf6049d74a0a"}
Nov 24 12:02:32 crc kubenswrapper[4930]: I1124 12:02:32.376824 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nwf54" podStartSLOduration=3.196819492 podStartE2EDuration="50.37680521s" podCreationTimestamp="2025-11-24 12:01:42 +0000 UTC" firstStartedPulling="2025-11-24 12:01:44.815472107 +0000 UTC m=+151.429800057" lastFinishedPulling="2025-11-24 12:02:31.995457825 +0000 UTC m=+198.609785775" observedRunningTime="2025-11-24 12:02:32.374585162 +0000 UTC m=+198.988913132" watchObservedRunningTime="2025-11-24 12:02:32.37680521 +0000 UTC m=+198.991133160"
Nov 24 12:02:33 crc kubenswrapper[4930]: I1124 12:02:33.241649 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nwf54"
Nov 24 12:02:33 crc kubenswrapper[4930]: I1124 12:02:33.241899 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nwf54"
Nov 24 12:02:33 crc kubenswrapper[4930]: I1124 12:02:33.291834 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nwf54"
Nov 24 12:02:33 crc kubenswrapper[4930]: I1124 12:02:33.829730 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7td4t"
Nov 24 12:02:33 crc kubenswrapper[4930]: I1124 12:02:33.875562 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7td4t"
Nov 24 12:02:34 crc kubenswrapper[4930]: I1124 12:02:34.313493 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zwmmc"
Nov 24 12:02:34 crc kubenswrapper[4930]: I1124 12:02:34.359321 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zwmmc"
Nov 24 12:02:34 crc kubenswrapper[4930]: I1124 12:02:34.378215 4930 generic.go:334] "Generic (PLEG): container finished" podID="000050ff-5ba3-4660-be21-00afb861c946" containerID="219e9572546020c067cec54b04abeae3a965b2e440518a3908ba0b2bd6dd5e78" exitCode=0
Nov 24 12:02:34 crc kubenswrapper[4930]: I1124 12:02:34.378261 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vggrt" event={"ID":"000050ff-5ba3-4660-be21-00afb861c946","Type":"ContainerDied","Data":"219e9572546020c067cec54b04abeae3a965b2e440518a3908ba0b2bd6dd5e78"}
Nov 24 12:02:34 crc kubenswrapper[4930]: I1124 12:02:34.380646 4930 generic.go:334] "Generic (PLEG): container finished" podID="8b72afae-5c1d-429f-98b7-27368332e3b1" containerID="b10cf2301ea398ce03332fc3df22c0d255a315b47b26d34b0c0e29708cf002f8" exitCode=0
Nov 24 12:02:34 crc kubenswrapper[4930]: I1124 12:02:34.380707 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4kdl" event={"ID":"8b72afae-5c1d-429f-98b7-27368332e3b1","Type":"ContainerDied","Data":"b10cf2301ea398ce03332fc3df22c0d255a315b47b26d34b0c0e29708cf002f8"}
Nov 24 12:02:34 crc kubenswrapper[4930]: I1124 12:02:34.393021 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss26h" event={"ID":"ecbd0a96-64bb-4de8-8d4d-8861e24fd414","Type":"ContainerStarted","Data":"cea60f8725ad4e17a94ef9b76ca8b9b84fa5249f6d58ceb6398c48590d6d380d"}
Nov 24 12:02:34 crc kubenswrapper[4930]: I1124 12:02:34.428855 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ss26h" podStartSLOduration=2.813682989 podStartE2EDuration="54.428803646s" podCreationTimestamp="2025-11-24 12:01:40 +0000 UTC" firstStartedPulling="2025-11-24 12:01:42.537256704 +0000 UTC m=+149.151584654" lastFinishedPulling="2025-11-24 12:02:34.152377361 +0000 UTC m=+200.766705311" observedRunningTime="2025-11-24 12:02:34.423827294 +0000 UTC m=+201.038155244" watchObservedRunningTime="2025-11-24 12:02:34.428803646 +0000 UTC m=+201.043131606"
Nov 24 12:02:35 crc kubenswrapper[4930]: I1124 12:02:35.403308 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4kdl" event={"ID":"8b72afae-5c1d-429f-98b7-27368332e3b1","Type":"ContainerStarted","Data":"5f81b9c10e1711e1a0622ec76e2d720fcbab8303d00e059a40f1d09fa6f6dd80"}
Nov 24 12:02:35 crc kubenswrapper[4930]: I1124 12:02:35.406528 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vggrt" event={"ID":"000050ff-5ba3-4660-be21-00afb861c946","Type":"ContainerStarted","Data":"5228f2da0d0d57ecdb94cdcc8d4463fcdd425e51f7dc1420b22c50d7ba9cc6f2"}
Nov 24 12:02:35 crc kubenswrapper[4930]: I1124 12:02:35.434696 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k4kdl" podStartSLOduration=3.169472252 podStartE2EDuration="55.43467058s" podCreationTimestamp="2025-11-24 12:01:40 +0000 UTC" firstStartedPulling="2025-11-24 12:01:42.528829709 +0000 UTC m=+149.143157659" lastFinishedPulling="2025-11-24 12:02:34.794028037 +0000 UTC m=+201.408355987" observedRunningTime="2025-11-24 12:02:35.430324637 +0000 UTC m=+202.044652587" watchObservedRunningTime="2025-11-24 12:02:35.43467058 +0000 UTC m=+202.048998520"
Nov 24 12:02:35 crc kubenswrapper[4930]: I1124 12:02:35.455671 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vggrt" podStartSLOduration=2.258613295 podStartE2EDuration="53.455643624s" podCreationTimestamp="2025-11-24 12:01:42 +0000 UTC" firstStartedPulling="2025-11-24 12:01:43.645809886 +0000 UTC m=+150.260137836" lastFinishedPulling="2025-11-24 12:02:34.842840215 +0000 UTC m=+201.457168165" observedRunningTime="2025-11-24 12:02:35.454027364 +0000 UTC m=+202.068355324" watchObservedRunningTime="2025-11-24 12:02:35.455643624 +0000 UTC m=+202.069971574"
Nov 24 12:02:37 crc kubenswrapper[4930]: I1124 12:02:37.644048 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zwmmc"]
Nov 24 12:02:37 crc kubenswrapper[4930]: I1124 12:02:37.644637 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zwmmc" podUID="c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" containerName="registry-server" containerID="cri-o://2082684d069ac8a0c78bbf0a00fc0fffbb4e90d0a07209f7967b9005b91e9c80" gracePeriod=2
Nov 24 12:02:37 crc kubenswrapper[4930]: I1124 12:02:37.971920 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwmmc"
Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.022463 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pfsw\" (UniqueName: \"kubernetes.io/projected/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-kube-api-access-4pfsw\") pod \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\" (UID: \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\") "
Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.022587 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-utilities\") pod \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\" (UID: \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\") " Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.022666 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-catalog-content\") pod
\"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\" (UID: \"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739\") " Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.023681 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-utilities" (OuterVolumeSpecName: "utilities") pod "c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" (UID: "c0b5649e-ae9b-4e9d-8db9-5b1129ae5739"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.029239 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-kube-api-access-4pfsw" (OuterVolumeSpecName: "kube-api-access-4pfsw") pod "c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" (UID: "c0b5649e-ae9b-4e9d-8db9-5b1129ae5739"). InnerVolumeSpecName "kube-api-access-4pfsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.121135 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" (UID: "c0b5649e-ae9b-4e9d-8db9-5b1129ae5739"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.123847 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.123886 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pfsw\" (UniqueName: \"kubernetes.io/projected/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-kube-api-access-4pfsw\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.123898 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.425737 4930 generic.go:334] "Generic (PLEG): container finished" podID="c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" containerID="2082684d069ac8a0c78bbf0a00fc0fffbb4e90d0a07209f7967b9005b91e9c80" exitCode=0 Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.425918 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwmmc" event={"ID":"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739","Type":"ContainerDied","Data":"2082684d069ac8a0c78bbf0a00fc0fffbb4e90d0a07209f7967b9005b91e9c80"} Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.426039 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwmmc" event={"ID":"c0b5649e-ae9b-4e9d-8db9-5b1129ae5739","Type":"ContainerDied","Data":"2c7932c93efe9dce58d4915acbf06b4c6343e5a7e15aadb6041dae7881a67bf0"} Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.425925 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zwmmc" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.426127 4930 scope.go:117] "RemoveContainer" containerID="2082684d069ac8a0c78bbf0a00fc0fffbb4e90d0a07209f7967b9005b91e9c80" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.447748 4930 scope.go:117] "RemoveContainer" containerID="d5a042127bf8da4c02bc9ffd40cd74c6dafe18ede5c6716d981f3c9ccee67e27" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.477277 4930 scope.go:117] "RemoveContainer" containerID="740c7d02145b29a1d60ae3b1b6acd45bb35f25713ce855eb362103f053062454" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.477443 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zwmmc"] Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.479161 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zwmmc"] Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.521745 4930 scope.go:117] "RemoveContainer" containerID="2082684d069ac8a0c78bbf0a00fc0fffbb4e90d0a07209f7967b9005b91e9c80" Nov 24 12:02:38 crc kubenswrapper[4930]: E1124 12:02:38.525103 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2082684d069ac8a0c78bbf0a00fc0fffbb4e90d0a07209f7967b9005b91e9c80\": container with ID starting with 2082684d069ac8a0c78bbf0a00fc0fffbb4e90d0a07209f7967b9005b91e9c80 not found: ID does not exist" containerID="2082684d069ac8a0c78bbf0a00fc0fffbb4e90d0a07209f7967b9005b91e9c80" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.525140 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2082684d069ac8a0c78bbf0a00fc0fffbb4e90d0a07209f7967b9005b91e9c80"} err="failed to get container status \"2082684d069ac8a0c78bbf0a00fc0fffbb4e90d0a07209f7967b9005b91e9c80\": rpc error: code = NotFound desc = could not find container 
\"2082684d069ac8a0c78bbf0a00fc0fffbb4e90d0a07209f7967b9005b91e9c80\": container with ID starting with 2082684d069ac8a0c78bbf0a00fc0fffbb4e90d0a07209f7967b9005b91e9c80 not found: ID does not exist" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.525182 4930 scope.go:117] "RemoveContainer" containerID="d5a042127bf8da4c02bc9ffd40cd74c6dafe18ede5c6716d981f3c9ccee67e27" Nov 24 12:02:38 crc kubenswrapper[4930]: E1124 12:02:38.525945 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5a042127bf8da4c02bc9ffd40cd74c6dafe18ede5c6716d981f3c9ccee67e27\": container with ID starting with d5a042127bf8da4c02bc9ffd40cd74c6dafe18ede5c6716d981f3c9ccee67e27 not found: ID does not exist" containerID="d5a042127bf8da4c02bc9ffd40cd74c6dafe18ede5c6716d981f3c9ccee67e27" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.525991 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5a042127bf8da4c02bc9ffd40cd74c6dafe18ede5c6716d981f3c9ccee67e27"} err="failed to get container status \"d5a042127bf8da4c02bc9ffd40cd74c6dafe18ede5c6716d981f3c9ccee67e27\": rpc error: code = NotFound desc = could not find container \"d5a042127bf8da4c02bc9ffd40cd74c6dafe18ede5c6716d981f3c9ccee67e27\": container with ID starting with d5a042127bf8da4c02bc9ffd40cd74c6dafe18ede5c6716d981f3c9ccee67e27 not found: ID does not exist" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.526018 4930 scope.go:117] "RemoveContainer" containerID="740c7d02145b29a1d60ae3b1b6acd45bb35f25713ce855eb362103f053062454" Nov 24 12:02:38 crc kubenswrapper[4930]: E1124 12:02:38.531108 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"740c7d02145b29a1d60ae3b1b6acd45bb35f25713ce855eb362103f053062454\": container with ID starting with 740c7d02145b29a1d60ae3b1b6acd45bb35f25713ce855eb362103f053062454 not found: ID does not exist" 
containerID="740c7d02145b29a1d60ae3b1b6acd45bb35f25713ce855eb362103f053062454" Nov 24 12:02:38 crc kubenswrapper[4930]: I1124 12:02:38.531185 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740c7d02145b29a1d60ae3b1b6acd45bb35f25713ce855eb362103f053062454"} err="failed to get container status \"740c7d02145b29a1d60ae3b1b6acd45bb35f25713ce855eb362103f053062454\": rpc error: code = NotFound desc = could not find container \"740c7d02145b29a1d60ae3b1b6acd45bb35f25713ce855eb362103f053062454\": container with ID starting with 740c7d02145b29a1d60ae3b1b6acd45bb35f25713ce855eb362103f053062454 not found: ID does not exist" Nov 24 12:02:40 crc kubenswrapper[4930]: I1124 12:02:40.091009 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" path="/var/lib/kubelet/pods/c0b5649e-ae9b-4e9d-8db9-5b1129ae5739/volumes" Nov 24 12:02:40 crc kubenswrapper[4930]: I1124 12:02:40.912602 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:02:40 crc kubenswrapper[4930]: I1124 12:02:40.912931 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:02:40 crc kubenswrapper[4930]: I1124 12:02:40.962340 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:02:41 crc kubenswrapper[4930]: I1124 12:02:41.238823 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:02:41 crc kubenswrapper[4930]: I1124 12:02:41.238864 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:02:41 crc kubenswrapper[4930]: I1124 12:02:41.288631 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:02:41 crc kubenswrapper[4930]: I1124 12:02:41.483388 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:02:41 crc kubenswrapper[4930]: I1124 12:02:41.490059 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:02:42 crc kubenswrapper[4930]: I1124 12:02:42.828197 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:02:42 crc kubenswrapper[4930]: I1124 12:02:42.828255 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:02:42 crc kubenswrapper[4930]: I1124 12:02:42.838530 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k4kdl"] Nov 24 12:02:42 crc kubenswrapper[4930]: I1124 12:02:42.874332 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:02:43 crc kubenswrapper[4930]: I1124 12:02:43.274338 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:02:43 crc kubenswrapper[4930]: I1124 12:02:43.452895 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k4kdl" podUID="8b72afae-5c1d-429f-98b7-27368332e3b1" containerName="registry-server" containerID="cri-o://5f81b9c10e1711e1a0622ec76e2d720fcbab8303d00e059a40f1d09fa6f6dd80" gracePeriod=2 Nov 24 12:02:43 crc kubenswrapper[4930]: I1124 12:02:43.497210 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:02:43 crc kubenswrapper[4930]: I1124 
12:02:43.754336 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:02:43 crc kubenswrapper[4930]: I1124 12:02:43.890621 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b72afae-5c1d-429f-98b7-27368332e3b1-catalog-content\") pod \"8b72afae-5c1d-429f-98b7-27368332e3b1\" (UID: \"8b72afae-5c1d-429f-98b7-27368332e3b1\") " Nov 24 12:02:43 crc kubenswrapper[4930]: I1124 12:02:43.890700 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b72afae-5c1d-429f-98b7-27368332e3b1-utilities\") pod \"8b72afae-5c1d-429f-98b7-27368332e3b1\" (UID: \"8b72afae-5c1d-429f-98b7-27368332e3b1\") " Nov 24 12:02:43 crc kubenswrapper[4930]: I1124 12:02:43.890762 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtnbv\" (UniqueName: \"kubernetes.io/projected/8b72afae-5c1d-429f-98b7-27368332e3b1-kube-api-access-mtnbv\") pod \"8b72afae-5c1d-429f-98b7-27368332e3b1\" (UID: \"8b72afae-5c1d-429f-98b7-27368332e3b1\") " Nov 24 12:02:43 crc kubenswrapper[4930]: I1124 12:02:43.892920 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b72afae-5c1d-429f-98b7-27368332e3b1-utilities" (OuterVolumeSpecName: "utilities") pod "8b72afae-5c1d-429f-98b7-27368332e3b1" (UID: "8b72afae-5c1d-429f-98b7-27368332e3b1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:02:43 crc kubenswrapper[4930]: I1124 12:02:43.899024 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b72afae-5c1d-429f-98b7-27368332e3b1-kube-api-access-mtnbv" (OuterVolumeSpecName: "kube-api-access-mtnbv") pod "8b72afae-5c1d-429f-98b7-27368332e3b1" (UID: "8b72afae-5c1d-429f-98b7-27368332e3b1"). InnerVolumeSpecName "kube-api-access-mtnbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:02:43 crc kubenswrapper[4930]: I1124 12:02:43.945811 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b72afae-5c1d-429f-98b7-27368332e3b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b72afae-5c1d-429f-98b7-27368332e3b1" (UID: "8b72afae-5c1d-429f-98b7-27368332e3b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:02:43 crc kubenswrapper[4930]: I1124 12:02:43.992039 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtnbv\" (UniqueName: \"kubernetes.io/projected/8b72afae-5c1d-429f-98b7-27368332e3b1-kube-api-access-mtnbv\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:43 crc kubenswrapper[4930]: I1124 12:02:43.992081 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b72afae-5c1d-429f-98b7-27368332e3b1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:43 crc kubenswrapper[4930]: I1124 12:02:43.992090 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b72afae-5c1d-429f-98b7-27368332e3b1-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.461281 4930 generic.go:334] "Generic (PLEG): container finished" podID="8b72afae-5c1d-429f-98b7-27368332e3b1" 
containerID="5f81b9c10e1711e1a0622ec76e2d720fcbab8303d00e059a40f1d09fa6f6dd80" exitCode=0 Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.461359 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k4kdl" Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.461396 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4kdl" event={"ID":"8b72afae-5c1d-429f-98b7-27368332e3b1","Type":"ContainerDied","Data":"5f81b9c10e1711e1a0622ec76e2d720fcbab8303d00e059a40f1d09fa6f6dd80"} Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.461763 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4kdl" event={"ID":"8b72afae-5c1d-429f-98b7-27368332e3b1","Type":"ContainerDied","Data":"4cddb3468c8b0d0d74565729b748df0bbeb993a9b6fe3306d83d47c8c866c814"} Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.461786 4930 scope.go:117] "RemoveContainer" containerID="5f81b9c10e1711e1a0622ec76e2d720fcbab8303d00e059a40f1d09fa6f6dd80" Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.480613 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k4kdl"] Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.482717 4930 scope.go:117] "RemoveContainer" containerID="b10cf2301ea398ce03332fc3df22c0d255a315b47b26d34b0c0e29708cf002f8" Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.492111 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k4kdl"] Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.498600 4930 scope.go:117] "RemoveContainer" containerID="8663d65fb537adf6f08be4ce94a7b0ca34f8d40da6f22b7485f61c79aa751011" Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.513179 4930 scope.go:117] "RemoveContainer" containerID="5f81b9c10e1711e1a0622ec76e2d720fcbab8303d00e059a40f1d09fa6f6dd80" Nov 24 
12:02:44 crc kubenswrapper[4930]: E1124 12:02:44.513600 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f81b9c10e1711e1a0622ec76e2d720fcbab8303d00e059a40f1d09fa6f6dd80\": container with ID starting with 5f81b9c10e1711e1a0622ec76e2d720fcbab8303d00e059a40f1d09fa6f6dd80 not found: ID does not exist" containerID="5f81b9c10e1711e1a0622ec76e2d720fcbab8303d00e059a40f1d09fa6f6dd80" Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.513663 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f81b9c10e1711e1a0622ec76e2d720fcbab8303d00e059a40f1d09fa6f6dd80"} err="failed to get container status \"5f81b9c10e1711e1a0622ec76e2d720fcbab8303d00e059a40f1d09fa6f6dd80\": rpc error: code = NotFound desc = could not find container \"5f81b9c10e1711e1a0622ec76e2d720fcbab8303d00e059a40f1d09fa6f6dd80\": container with ID starting with 5f81b9c10e1711e1a0622ec76e2d720fcbab8303d00e059a40f1d09fa6f6dd80 not found: ID does not exist" Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.513691 4930 scope.go:117] "RemoveContainer" containerID="b10cf2301ea398ce03332fc3df22c0d255a315b47b26d34b0c0e29708cf002f8" Nov 24 12:02:44 crc kubenswrapper[4930]: E1124 12:02:44.513986 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b10cf2301ea398ce03332fc3df22c0d255a315b47b26d34b0c0e29708cf002f8\": container with ID starting with b10cf2301ea398ce03332fc3df22c0d255a315b47b26d34b0c0e29708cf002f8 not found: ID does not exist" containerID="b10cf2301ea398ce03332fc3df22c0d255a315b47b26d34b0c0e29708cf002f8" Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.514059 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b10cf2301ea398ce03332fc3df22c0d255a315b47b26d34b0c0e29708cf002f8"} err="failed to get container status 
\"b10cf2301ea398ce03332fc3df22c0d255a315b47b26d34b0c0e29708cf002f8\": rpc error: code = NotFound desc = could not find container \"b10cf2301ea398ce03332fc3df22c0d255a315b47b26d34b0c0e29708cf002f8\": container with ID starting with b10cf2301ea398ce03332fc3df22c0d255a315b47b26d34b0c0e29708cf002f8 not found: ID does not exist" Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.514093 4930 scope.go:117] "RemoveContainer" containerID="8663d65fb537adf6f08be4ce94a7b0ca34f8d40da6f22b7485f61c79aa751011" Nov 24 12:02:44 crc kubenswrapper[4930]: E1124 12:02:44.514475 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8663d65fb537adf6f08be4ce94a7b0ca34f8d40da6f22b7485f61c79aa751011\": container with ID starting with 8663d65fb537adf6f08be4ce94a7b0ca34f8d40da6f22b7485f61c79aa751011 not found: ID does not exist" containerID="8663d65fb537adf6f08be4ce94a7b0ca34f8d40da6f22b7485f61c79aa751011" Nov 24 12:02:44 crc kubenswrapper[4930]: I1124 12:02:44.514505 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8663d65fb537adf6f08be4ce94a7b0ca34f8d40da6f22b7485f61c79aa751011"} err="failed to get container status \"8663d65fb537adf6f08be4ce94a7b0ca34f8d40da6f22b7485f61c79aa751011\": rpc error: code = NotFound desc = could not find container \"8663d65fb537adf6f08be4ce94a7b0ca34f8d40da6f22b7485f61c79aa751011\": container with ID starting with 8663d65fb537adf6f08be4ce94a7b0ca34f8d40da6f22b7485f61c79aa751011 not found: ID does not exist" Nov 24 12:02:45 crc kubenswrapper[4930]: I1124 12:02:45.239923 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwf54"] Nov 24 12:02:45 crc kubenswrapper[4930]: I1124 12:02:45.240302 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nwf54" podUID="e23c1567-d78e-4ffe-b601-6e4c70486428" 
containerName="registry-server" containerID="cri-o://a723c1c3fbc56316b3f76c9b5a3c6d0448cfe3e9f394d3370cb53056a09766de" gracePeriod=2 Nov 24 12:02:45 crc kubenswrapper[4930]: I1124 12:02:45.470909 4930 generic.go:334] "Generic (PLEG): container finished" podID="e23c1567-d78e-4ffe-b601-6e4c70486428" containerID="a723c1c3fbc56316b3f76c9b5a3c6d0448cfe3e9f394d3370cb53056a09766de" exitCode=0 Nov 24 12:02:45 crc kubenswrapper[4930]: I1124 12:02:45.470976 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwf54" event={"ID":"e23c1567-d78e-4ffe-b601-6e4c70486428","Type":"ContainerDied","Data":"a723c1c3fbc56316b3f76c9b5a3c6d0448cfe3e9f394d3370cb53056a09766de"} Nov 24 12:02:45 crc kubenswrapper[4930]: I1124 12:02:45.600424 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:02:45 crc kubenswrapper[4930]: I1124 12:02:45.610203 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e23c1567-d78e-4ffe-b601-6e4c70486428-catalog-content\") pod \"e23c1567-d78e-4ffe-b601-6e4c70486428\" (UID: \"e23c1567-d78e-4ffe-b601-6e4c70486428\") " Nov 24 12:02:45 crc kubenswrapper[4930]: I1124 12:02:45.610286 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e23c1567-d78e-4ffe-b601-6e4c70486428-utilities\") pod \"e23c1567-d78e-4ffe-b601-6e4c70486428\" (UID: \"e23c1567-d78e-4ffe-b601-6e4c70486428\") " Nov 24 12:02:45 crc kubenswrapper[4930]: I1124 12:02:45.610374 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4tp5\" (UniqueName: \"kubernetes.io/projected/e23c1567-d78e-4ffe-b601-6e4c70486428-kube-api-access-v4tp5\") pod \"e23c1567-d78e-4ffe-b601-6e4c70486428\" (UID: \"e23c1567-d78e-4ffe-b601-6e4c70486428\") " Nov 24 12:02:45 crc 
kubenswrapper[4930]: I1124 12:02:45.612489 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e23c1567-d78e-4ffe-b601-6e4c70486428-utilities" (OuterVolumeSpecName: "utilities") pod "e23c1567-d78e-4ffe-b601-6e4c70486428" (UID: "e23c1567-d78e-4ffe-b601-6e4c70486428"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:02:45 crc kubenswrapper[4930]: I1124 12:02:45.615754 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23c1567-d78e-4ffe-b601-6e4c70486428-kube-api-access-v4tp5" (OuterVolumeSpecName: "kube-api-access-v4tp5") pod "e23c1567-d78e-4ffe-b601-6e4c70486428" (UID: "e23c1567-d78e-4ffe-b601-6e4c70486428"). InnerVolumeSpecName "kube-api-access-v4tp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:02:45 crc kubenswrapper[4930]: I1124 12:02:45.615889 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e23c1567-d78e-4ffe-b601-6e4c70486428-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:45 crc kubenswrapper[4930]: I1124 12:02:45.633844 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e23c1567-d78e-4ffe-b601-6e4c70486428-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e23c1567-d78e-4ffe-b601-6e4c70486428" (UID: "e23c1567-d78e-4ffe-b601-6e4c70486428"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:02:45 crc kubenswrapper[4930]: I1124 12:02:45.717474 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4tp5\" (UniqueName: \"kubernetes.io/projected/e23c1567-d78e-4ffe-b601-6e4c70486428-kube-api-access-v4tp5\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:45 crc kubenswrapper[4930]: I1124 12:02:45.717549 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e23c1567-d78e-4ffe-b601-6e4c70486428-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:46 crc kubenswrapper[4930]: I1124 12:02:46.101284 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b72afae-5c1d-429f-98b7-27368332e3b1" path="/var/lib/kubelet/pods/8b72afae-5c1d-429f-98b7-27368332e3b1/volumes" Nov 24 12:02:46 crc kubenswrapper[4930]: I1124 12:02:46.479589 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwf54" Nov 24 12:02:46 crc kubenswrapper[4930]: I1124 12:02:46.480092 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwf54" event={"ID":"e23c1567-d78e-4ffe-b601-6e4c70486428","Type":"ContainerDied","Data":"ab87a749c549f4739fe04b8989753d9f2968aff1edecbb1aa790995ef7ff7385"} Nov 24 12:02:46 crc kubenswrapper[4930]: I1124 12:02:46.480129 4930 scope.go:117] "RemoveContainer" containerID="a723c1c3fbc56316b3f76c9b5a3c6d0448cfe3e9f394d3370cb53056a09766de" Nov 24 12:02:46 crc kubenswrapper[4930]: I1124 12:02:46.503347 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwf54"] Nov 24 12:02:46 crc kubenswrapper[4930]: I1124 12:02:46.508181 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwf54"] Nov 24 12:02:46 crc kubenswrapper[4930]: I1124 12:02:46.508960 4930 scope.go:117] "RemoveContainer" 
containerID="855e66e2d730fdb02acb93090196c55e61270e1692457655a4049cc2276769f3"
Nov 24 12:02:46 crc kubenswrapper[4930]: I1124 12:02:46.521461 4930 scope.go:117] "RemoveContainer" containerID="f4efd9c6dd8c4085d595a954cd74c0dac976eaa7b1280576db9c59a50f85b130"
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.280737 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" podUID="f956bae9-4db9-4698-bb42-5b6c872d8b35" containerName="oauth-openshift" containerID="cri-o://6eaeb6c045deead5ede4890a79784d229321b454bd2312f8a875f734705a06ec" gracePeriod=15
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.489261 4930 generic.go:334] "Generic (PLEG): container finished" podID="f956bae9-4db9-4698-bb42-5b6c872d8b35" containerID="6eaeb6c045deead5ede4890a79784d229321b454bd2312f8a875f734705a06ec" exitCode=0
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.489299 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" event={"ID":"f956bae9-4db9-4698-bb42-5b6c872d8b35","Type":"ContainerDied","Data":"6eaeb6c045deead5ede4890a79784d229321b454bd2312f8a875f734705a06ec"}
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.618451 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.760183 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-serving-cert\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.760749 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-login\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.760801 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f956bae9-4db9-4698-bb42-5b6c872d8b35-audit-dir\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.760891 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f956bae9-4db9-4698-bb42-5b6c872d8b35-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.760937 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-router-certs\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.760957 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-provider-selection\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.761007 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-ocp-branding-template\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.761026 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-error\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.761760 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-cliconfig\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.761797 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-session\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.761820 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-audit-policies\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.761839 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-trusted-ca-bundle\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.761888 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb2ng\" (UniqueName: \"kubernetes.io/projected/f956bae9-4db9-4698-bb42-5b6c872d8b35-kube-api-access-qb2ng\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.761909 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-idp-0-file-data\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.761937 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-service-ca\") pod \"f956bae9-4db9-4698-bb42-5b6c872d8b35\" (UID: \"f956bae9-4db9-4698-bb42-5b6c872d8b35\") "
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.762210 4930 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f956bae9-4db9-4698-bb42-5b6c872d8b35-audit-dir\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.762290 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.762553 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.762661 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.762946 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.776670 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.781201 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.788311 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.788713 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f956bae9-4db9-4698-bb42-5b6c872d8b35-kube-api-access-qb2ng" (OuterVolumeSpecName: "kube-api-access-qb2ng") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "kube-api-access-qb2ng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.790680 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.791790 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.792031 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.792598 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.792762 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f956bae9-4db9-4698-bb42-5b6c872d8b35" (UID: "f956bae9-4db9-4698-bb42-5b6c872d8b35"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.863420 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.863457 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.863471 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.863483 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.863494 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.863504 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.863514 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.863526 4930 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.863549 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.863559 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb2ng\" (UniqueName: \"kubernetes.io/projected/f956bae9-4db9-4698-bb42-5b6c872d8b35-kube-api-access-qb2ng\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.863568 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.863577 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:47 crc kubenswrapper[4930]: I1124 12:02:47.863586 4930 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f956bae9-4db9-4698-bb42-5b6c872d8b35-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 12:02:48 crc kubenswrapper[4930]: I1124 12:02:48.090987 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e23c1567-d78e-4ffe-b601-6e4c70486428" path="/var/lib/kubelet/pods/e23c1567-d78e-4ffe-b601-6e4c70486428/volumes"
Nov 24 12:02:48 crc kubenswrapper[4930]: I1124 12:02:48.495234 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9d78l" event={"ID":"f956bae9-4db9-4698-bb42-5b6c872d8b35","Type":"ContainerDied","Data":"2dbb21355e73a3c60f9b57a4f4c4c0f0b44ef1ee80f33c0ea34089f6cec1110c"}
Nov 24 12:02:48 crc kubenswrapper[4930]: I1124 12:02:48.495822 4930 scope.go:117] "RemoveContainer" containerID="6eaeb6c045deead5ede4890a79784d229321b454bd2312f8a875f734705a06ec"
Nov 24 12:02:48 crc kubenswrapper[4930]: I1124 12:02:48.495411 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9d78l"
Nov 24 12:02:48 crc kubenswrapper[4930]: I1124 12:02:48.524882 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9d78l"]
Nov 24 12:02:48 crc kubenswrapper[4930]: I1124 12:02:48.530597 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9d78l"]
Nov 24 12:02:50 crc kubenswrapper[4930]: I1124 12:02:50.093993 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f956bae9-4db9-4698-bb42-5b6c872d8b35" path="/var/lib/kubelet/pods/f956bae9-4db9-4698-bb42-5b6c872d8b35/volumes"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.747608 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6f96647944-k8d8k"]
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.747841 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23c1567-d78e-4ffe-b601-6e4c70486428" containerName="extract-content"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.747857 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23c1567-d78e-4ffe-b601-6e4c70486428" containerName="extract-content"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.747887 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23c1567-d78e-4ffe-b601-6e4c70486428" containerName="registry-server"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.747895 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23c1567-d78e-4ffe-b601-6e4c70486428" containerName="registry-server"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.747906 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f956bae9-4db9-4698-bb42-5b6c872d8b35" containerName="oauth-openshift"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.747913 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="f956bae9-4db9-4698-bb42-5b6c872d8b35" containerName="oauth-openshift"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.747925 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b72afae-5c1d-429f-98b7-27368332e3b1" containerName="registry-server"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.747932 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b72afae-5c1d-429f-98b7-27368332e3b1" containerName="registry-server"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.747940 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b72afae-5c1d-429f-98b7-27368332e3b1" containerName="extract-utilities"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.747945 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b72afae-5c1d-429f-98b7-27368332e3b1" containerName="extract-utilities"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.747953 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" containerName="extract-content"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.747959 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" containerName="extract-content"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.747970 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea65b02d-9e8a-4089-b867-d1c7cfb70df5" containerName="collect-profiles"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.747975 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea65b02d-9e8a-4089-b867-d1c7cfb70df5" containerName="collect-profiles"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.747982 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23c1567-d78e-4ffe-b601-6e4c70486428" containerName="extract-utilities"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.747988 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23c1567-d78e-4ffe-b601-6e4c70486428" containerName="extract-utilities"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.747994 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" containerName="registry-server"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748001 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" containerName="registry-server"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.748012 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe296064-195b-42d0-a0a1-8012587b8e04" containerName="extract-utilities"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748018 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe296064-195b-42d0-a0a1-8012587b8e04" containerName="extract-utilities"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.748027 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a171f6-a50f-4a41-bd81-cab660b6f347" containerName="pruner"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748033 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a171f6-a50f-4a41-bd81-cab660b6f347" containerName="pruner"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.748040 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b72afae-5c1d-429f-98b7-27368332e3b1" containerName="extract-content"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748046 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b72afae-5c1d-429f-98b7-27368332e3b1" containerName="extract-content"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.748054 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="394cdc2d-42ad-46e4-9afb-cb4158ecc3a3" containerName="pruner"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748059 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="394cdc2d-42ad-46e4-9afb-cb4158ecc3a3" containerName="pruner"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.748069 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe296064-195b-42d0-a0a1-8012587b8e04" containerName="extract-content"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748076 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe296064-195b-42d0-a0a1-8012587b8e04" containerName="extract-content"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.748083 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" containerName="extract-utilities"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748089 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" containerName="extract-utilities"
Nov 24 12:02:52 crc kubenswrapper[4930]: E1124 12:02:52.748097 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe296064-195b-42d0-a0a1-8012587b8e04" containerName="registry-server"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748103 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe296064-195b-42d0-a0a1-8012587b8e04" containerName="registry-server"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748191 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3a171f6-a50f-4a41-bd81-cab660b6f347" containerName="pruner"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748201 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b72afae-5c1d-429f-98b7-27368332e3b1" containerName="registry-server"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748210 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="f956bae9-4db9-4698-bb42-5b6c872d8b35" containerName="oauth-openshift"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748219 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea65b02d-9e8a-4089-b867-d1c7cfb70df5" containerName="collect-profiles"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748228 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="394cdc2d-42ad-46e4-9afb-cb4158ecc3a3" containerName="pruner"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748235 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe296064-195b-42d0-a0a1-8012587b8e04" containerName="registry-server"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748240 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b5649e-ae9b-4e9d-8db9-5b1129ae5739" containerName="registry-server"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748248 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23c1567-d78e-4ffe-b601-6e4c70486428" containerName="registry-server"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.748662 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.750705 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.752442 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.753517 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.755809 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.756066 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.757533 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.757673 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.758592 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.758665 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.758610 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.758931 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.762484 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.765586 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.766526 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f96647944-k8d8k"]
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.772815 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.775801 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.842611 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-audit-dir\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.842673 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-session\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.842706 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.842742 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.842769 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-audit-policies\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.842797 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.842845 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.842873 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.842904 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.842928 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.842971 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.842998 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5xh8\" (UniqueName: \"kubernetes.io/projected/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-kube-api-access-b5xh8\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.843026 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-user-template-error\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.843054 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-user-template-login\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.944082 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.944664 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5xh8\" (UniqueName: \"kubernetes.io/projected/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-kube-api-access-b5xh8\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.944833 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-user-template-error\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.944898 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.945081 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-user-template-login\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.945232 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-audit-dir\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.945272 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-session\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k"
Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.945301 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: 
\"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.945356 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.945391 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-audit-policies\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.945415 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.945479 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.945514 4930 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.945565 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.945591 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.946532 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-audit-policies\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.946595 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-audit-dir\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " 
pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.947104 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.947150 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.950369 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-session\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.951049 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-user-template-login\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.951820 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-user-template-error\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.951897 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.954233 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.954444 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.954945 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " 
pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.956133 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:52 crc kubenswrapper[4930]: I1124 12:02:52.964316 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5xh8\" (UniqueName: \"kubernetes.io/projected/d2ae5ab1-e699-4c82-9b26-f0e998a3746e-kube-api-access-b5xh8\") pod \"oauth-openshift-6f96647944-k8d8k\" (UID: \"d2ae5ab1-e699-4c82-9b26-f0e998a3746e\") " pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:53 crc kubenswrapper[4930]: I1124 12:02:53.072805 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:53 crc kubenswrapper[4930]: I1124 12:02:53.246883 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f96647944-k8d8k"] Nov 24 12:02:53 crc kubenswrapper[4930]: I1124 12:02:53.525410 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" event={"ID":"d2ae5ab1-e699-4c82-9b26-f0e998a3746e","Type":"ContainerStarted","Data":"df7a5fcf55a716eba771de0514834a885e03aa18c68937215c5bc33bb1d899d6"} Nov 24 12:02:53 crc kubenswrapper[4930]: I1124 12:02:53.525758 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:02:53 crc kubenswrapper[4930]: I1124 12:02:53.525770 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" event={"ID":"d2ae5ab1-e699-4c82-9b26-f0e998a3746e","Type":"ContainerStarted","Data":"914cae47446555f1424ba23f85a940364d8a2d328f95e3e392bee5be8518f646"} Nov 24 12:02:53 crc kubenswrapper[4930]: I1124 12:02:53.527466 4930 patch_prober.go:28] interesting pod/oauth-openshift-6f96647944-k8d8k container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.54:6443/healthz\": dial tcp 10.217.0.54:6443: connect: connection refused" start-of-body= Nov 24 12:02:53 crc kubenswrapper[4930]: I1124 12:02:53.527518 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" podUID="d2ae5ab1-e699-4c82-9b26-f0e998a3746e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.54:6443/healthz\": dial tcp 10.217.0.54:6443: connect: connection refused" Nov 24 12:02:53 crc kubenswrapper[4930]: I1124 12:02:53.545399 4930 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" podStartSLOduration=31.545384534 podStartE2EDuration="31.545384534s" podCreationTimestamp="2025-11-24 12:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:02:53.543049492 +0000 UTC m=+220.157377442" watchObservedRunningTime="2025-11-24 12:02:53.545384534 +0000 UTC m=+220.159712484" Nov 24 12:02:54 crc kubenswrapper[4930]: I1124 12:02:54.536844 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6f96647944-k8d8k" Nov 24 12:03:01 crc kubenswrapper[4930]: I1124 12:03:01.809399 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:03:01 crc kubenswrapper[4930]: I1124 12:03:01.810916 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:03:01 crc kubenswrapper[4930]: I1124 12:03:01.811024 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 12:03:01 crc kubenswrapper[4930]: I1124 12:03:01.811626 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103"} pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:03:01 crc kubenswrapper[4930]: I1124 12:03:01.811788 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103" gracePeriod=600 Nov 24 12:03:02 crc kubenswrapper[4930]: I1124 12:03:02.576154 4930 generic.go:334] "Generic (PLEG): container finished" podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103" exitCode=0 Nov 24 12:03:02 crc kubenswrapper[4930]: I1124 12:03:02.576227 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103"} Nov 24 12:03:02 crc kubenswrapper[4930]: I1124 12:03:02.576555 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"3991dbeaed794b3c06979f1cfd6d6accfca0d3321783365d631089f10138ad78"} Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.235216 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4tv4h"] Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.236460 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4tv4h" podUID="bc7bba02-37bc-4786-bd0a-3b5710779d25" containerName="registry-server" containerID="cri-o://75ef6df97721fabe29828e07c3051ca72e380a0b2df4beb88797c9113fe51067" gracePeriod=30 Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.243030 4930 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ss26h"] Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.243355 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ss26h" podUID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" containerName="registry-server" containerID="cri-o://cea60f8725ad4e17a94ef9b76ca8b9b84fa5249f6d58ceb6398c48590d6d380d" gracePeriod=30 Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.254699 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qh578"] Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.258749 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qh578" podUID="f8908bf3-e171-4859-80c7-baa64ca6e11c" containerName="marketplace-operator" containerID="cri-o://0e9a6a67a4f154aebce8a5b30f31f1590f3a0029827e843608b1f14ee9054fe4" gracePeriod=30 Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.269708 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vggrt"] Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.270010 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vggrt" podUID="000050ff-5ba3-4660-be21-00afb861c946" containerName="registry-server" containerID="cri-o://5228f2da0d0d57ecdb94cdcc8d4463fcdd425e51f7dc1420b22c50d7ba9cc6f2" gracePeriod=30 Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.281234 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7td4t"] Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.281598 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7td4t" podUID="d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" 
containerName="registry-server" containerID="cri-o://4303867d1df45887dd04ef0113c40b7ec05c4952cc50b40cd33f398b1866669a" gracePeriod=30 Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.287161 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vn8d4"] Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.288006 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.316385 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vn8d4"] Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.469342 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqd55\" (UniqueName: \"kubernetes.io/projected/6adfccee-6f09-45b8-b8b9-4cd6fe524680-kube-api-access-qqd55\") pod \"marketplace-operator-79b997595-vn8d4\" (UID: \"6adfccee-6f09-45b8-b8b9-4cd6fe524680\") " pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.469428 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6adfccee-6f09-45b8-b8b9-4cd6fe524680-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vn8d4\" (UID: \"6adfccee-6f09-45b8-b8b9-4cd6fe524680\") " pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.469474 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6adfccee-6f09-45b8-b8b9-4cd6fe524680-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vn8d4\" (UID: \"6adfccee-6f09-45b8-b8b9-4cd6fe524680\") 
" pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.570512 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6adfccee-6f09-45b8-b8b9-4cd6fe524680-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vn8d4\" (UID: \"6adfccee-6f09-45b8-b8b9-4cd6fe524680\") " pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.570911 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqd55\" (UniqueName: \"kubernetes.io/projected/6adfccee-6f09-45b8-b8b9-4cd6fe524680-kube-api-access-qqd55\") pod \"marketplace-operator-79b997595-vn8d4\" (UID: \"6adfccee-6f09-45b8-b8b9-4cd6fe524680\") " pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.570966 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6adfccee-6f09-45b8-b8b9-4cd6fe524680-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vn8d4\" (UID: \"6adfccee-6f09-45b8-b8b9-4cd6fe524680\") " pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.572442 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6adfccee-6f09-45b8-b8b9-4cd6fe524680-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vn8d4\" (UID: \"6adfccee-6f09-45b8-b8b9-4cd6fe524680\") " pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.576678 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/6adfccee-6f09-45b8-b8b9-4cd6fe524680-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vn8d4\" (UID: \"6adfccee-6f09-45b8-b8b9-4cd6fe524680\") " pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.588935 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqd55\" (UniqueName: \"kubernetes.io/projected/6adfccee-6f09-45b8-b8b9-4cd6fe524680-kube-api-access-qqd55\") pod \"marketplace-operator-79b997595-vn8d4\" (UID: \"6adfccee-6f09-45b8-b8b9-4cd6fe524680\") " pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.717558 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.723184 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.734678 4930 generic.go:334] "Generic (PLEG): container finished" podID="000050ff-5ba3-4660-be21-00afb861c946" containerID="5228f2da0d0d57ecdb94cdcc8d4463fcdd425e51f7dc1420b22c50d7ba9cc6f2" exitCode=0 Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.734779 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vggrt" event={"ID":"000050ff-5ba3-4660-be21-00afb861c946","Type":"ContainerDied","Data":"5228f2da0d0d57ecdb94cdcc8d4463fcdd425e51f7dc1420b22c50d7ba9cc6f2"} Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.734806 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vggrt" event={"ID":"000050ff-5ba3-4660-be21-00afb861c946","Type":"ContainerDied","Data":"a873cca958d075a02cc7b44f05888534b2392e6795dbb664d41bdedb1550ae70"} Nov 24 12:03:25 crc 
kubenswrapper[4930]: I1124 12:03:25.734816 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a873cca958d075a02cc7b44f05888534b2392e6795dbb664d41bdedb1550ae70" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.737997 4930 generic.go:334] "Generic (PLEG): container finished" podID="bc7bba02-37bc-4786-bd0a-3b5710779d25" containerID="75ef6df97721fabe29828e07c3051ca72e380a0b2df4beb88797c9113fe51067" exitCode=0 Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.738087 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tv4h" event={"ID":"bc7bba02-37bc-4786-bd0a-3b5710779d25","Type":"ContainerDied","Data":"75ef6df97721fabe29828e07c3051ca72e380a0b2df4beb88797c9113fe51067"} Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.745210 4930 generic.go:334] "Generic (PLEG): container finished" podID="d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" containerID="4303867d1df45887dd04ef0113c40b7ec05c4952cc50b40cd33f398b1866669a" exitCode=0 Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.745303 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7td4t" event={"ID":"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43","Type":"ContainerDied","Data":"4303867d1df45887dd04ef0113c40b7ec05c4952cc50b40cd33f398b1866669a"} Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.745336 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7td4t" event={"ID":"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43","Type":"ContainerDied","Data":"655f37fc6f85422579f26bd5dc46e4e719721cef962df15110f520d270dbc29a"} Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.745351 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="655f37fc6f85422579f26bd5dc46e4e719721cef962df15110f520d270dbc29a" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.748323 4930 generic.go:334] "Generic (PLEG): container 
finished" podID="f8908bf3-e171-4859-80c7-baa64ca6e11c" containerID="0e9a6a67a4f154aebce8a5b30f31f1590f3a0029827e843608b1f14ee9054fe4" exitCode=0 Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.748379 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qh578" event={"ID":"f8908bf3-e171-4859-80c7-baa64ca6e11c","Type":"ContainerDied","Data":"0e9a6a67a4f154aebce8a5b30f31f1590f3a0029827e843608b1f14ee9054fe4"} Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.748417 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qh578" event={"ID":"f8908bf3-e171-4859-80c7-baa64ca6e11c","Type":"ContainerDied","Data":"de60c5a8f637803b90e4e8a93dc81997568f37fede47336773231935da2bde6b"} Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.748432 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de60c5a8f637803b90e4e8a93dc81997568f37fede47336773231935da2bde6b" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.751435 4930 generic.go:334] "Generic (PLEG): container finished" podID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" containerID="cea60f8725ad4e17a94ef9b76ca8b9b84fa5249f6d58ceb6398c48590d6d380d" exitCode=0 Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.751477 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss26h" event={"ID":"ecbd0a96-64bb-4de8-8d4d-8861e24fd414","Type":"ContainerDied","Data":"cea60f8725ad4e17a94ef9b76ca8b9b84fa5249f6d58ceb6398c48590d6d380d"} Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.751508 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss26h" event={"ID":"ecbd0a96-64bb-4de8-8d4d-8861e24fd414","Type":"ContainerDied","Data":"4eeb8df799e3fc31a1f0870579edf3045aaa68fb3622f2f00b9523cf479be74b"} Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.751571 4930 
scope.go:117] "RemoveContainer" containerID="cea60f8725ad4e17a94ef9b76ca8b9b84fa5249f6d58ceb6398c48590d6d380d" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.751723 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ss26h" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.765234 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7td4t" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.780149 4930 scope.go:117] "RemoveContainer" containerID="498070d1dc690930c1d382652536185aa6c27087fbee142e627faf6049d74a0a" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.800195 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qh578" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.820658 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.834553 4930 scope.go:117] "RemoveContainer" containerID="751910811d1af7ca105c6f21b6eaefe16c315e8ac4a894601953aaabf7a33813" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.863034 4930 scope.go:117] "RemoveContainer" containerID="cea60f8725ad4e17a94ef9b76ca8b9b84fa5249f6d58ceb6398c48590d6d380d" Nov 24 12:03:25 crc kubenswrapper[4930]: E1124 12:03:25.863854 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea60f8725ad4e17a94ef9b76ca8b9b84fa5249f6d58ceb6398c48590d6d380d\": container with ID starting with cea60f8725ad4e17a94ef9b76ca8b9b84fa5249f6d58ceb6398c48590d6d380d not found: ID does not exist" containerID="cea60f8725ad4e17a94ef9b76ca8b9b84fa5249f6d58ceb6398c48590d6d380d" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.863896 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea60f8725ad4e17a94ef9b76ca8b9b84fa5249f6d58ceb6398c48590d6d380d"} err="failed to get container status \"cea60f8725ad4e17a94ef9b76ca8b9b84fa5249f6d58ceb6398c48590d6d380d\": rpc error: code = NotFound desc = could not find container \"cea60f8725ad4e17a94ef9b76ca8b9b84fa5249f6d58ceb6398c48590d6d380d\": container with ID starting with cea60f8725ad4e17a94ef9b76ca8b9b84fa5249f6d58ceb6398c48590d6d380d not found: ID does not exist" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.863924 4930 scope.go:117] "RemoveContainer" containerID="498070d1dc690930c1d382652536185aa6c27087fbee142e627faf6049d74a0a" Nov 24 12:03:25 crc kubenswrapper[4930]: E1124 12:03:25.864378 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"498070d1dc690930c1d382652536185aa6c27087fbee142e627faf6049d74a0a\": container with ID starting with 
498070d1dc690930c1d382652536185aa6c27087fbee142e627faf6049d74a0a not found: ID does not exist" containerID="498070d1dc690930c1d382652536185aa6c27087fbee142e627faf6049d74a0a" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.864426 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498070d1dc690930c1d382652536185aa6c27087fbee142e627faf6049d74a0a"} err="failed to get container status \"498070d1dc690930c1d382652536185aa6c27087fbee142e627faf6049d74a0a\": rpc error: code = NotFound desc = could not find container \"498070d1dc690930c1d382652536185aa6c27087fbee142e627faf6049d74a0a\": container with ID starting with 498070d1dc690930c1d382652536185aa6c27087fbee142e627faf6049d74a0a not found: ID does not exist" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.864458 4930 scope.go:117] "RemoveContainer" containerID="751910811d1af7ca105c6f21b6eaefe16c315e8ac4a894601953aaabf7a33813" Nov 24 12:03:25 crc kubenswrapper[4930]: E1124 12:03:25.864859 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"751910811d1af7ca105c6f21b6eaefe16c315e8ac4a894601953aaabf7a33813\": container with ID starting with 751910811d1af7ca105c6f21b6eaefe16c315e8ac4a894601953aaabf7a33813 not found: ID does not exist" containerID="751910811d1af7ca105c6f21b6eaefe16c315e8ac4a894601953aaabf7a33813" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.864897 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"751910811d1af7ca105c6f21b6eaefe16c315e8ac4a894601953aaabf7a33813"} err="failed to get container status \"751910811d1af7ca105c6f21b6eaefe16c315e8ac4a894601953aaabf7a33813\": rpc error: code = NotFound desc = could not find container \"751910811d1af7ca105c6f21b6eaefe16c315e8ac4a894601953aaabf7a33813\": container with ID starting with 751910811d1af7ca105c6f21b6eaefe16c315e8ac4a894601953aaabf7a33813 not found: ID does not 
exist" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.874058 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-utilities\") pod \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\" (UID: \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\") " Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.874107 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-utilities\") pod \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\" (UID: \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\") " Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.874190 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf79g\" (UniqueName: \"kubernetes.io/projected/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-kube-api-access-cf79g\") pod \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\" (UID: \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\") " Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.874217 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-catalog-content\") pod \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\" (UID: \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\") " Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.874243 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvb7j\" (UniqueName: \"kubernetes.io/projected/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-kube-api-access-wvb7j\") pod \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\" (UID: \"ecbd0a96-64bb-4de8-8d4d-8861e24fd414\") " Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.874276 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-catalog-content\") pod \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\" (UID: \"d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43\") " Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.875257 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-utilities" (OuterVolumeSpecName: "utilities") pod "ecbd0a96-64bb-4de8-8d4d-8861e24fd414" (UID: "ecbd0a96-64bb-4de8-8d4d-8861e24fd414"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.875257 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-utilities" (OuterVolumeSpecName: "utilities") pod "d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" (UID: "d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.878625 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-kube-api-access-cf79g" (OuterVolumeSpecName: "kube-api-access-cf79g") pod "d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" (UID: "d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43"). InnerVolumeSpecName "kube-api-access-cf79g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.878771 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-kube-api-access-wvb7j" (OuterVolumeSpecName: "kube-api-access-wvb7j") pod "ecbd0a96-64bb-4de8-8d4d-8861e24fd414" (UID: "ecbd0a96-64bb-4de8-8d4d-8861e24fd414"). InnerVolumeSpecName "kube-api-access-wvb7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.935292 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecbd0a96-64bb-4de8-8d4d-8861e24fd414" (UID: "ecbd0a96-64bb-4de8-8d4d-8861e24fd414"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.944843 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vn8d4"] Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.975116 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f8908bf3-e171-4859-80c7-baa64ca6e11c-marketplace-operator-metrics\") pod \"f8908bf3-e171-4859-80c7-baa64ca6e11c\" (UID: \"f8908bf3-e171-4859-80c7-baa64ca6e11c\") " Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.975173 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cjm5\" (UniqueName: \"kubernetes.io/projected/f8908bf3-e171-4859-80c7-baa64ca6e11c-kube-api-access-9cjm5\") pod \"f8908bf3-e171-4859-80c7-baa64ca6e11c\" (UID: \"f8908bf3-e171-4859-80c7-baa64ca6e11c\") " Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.975211 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000050ff-5ba3-4660-be21-00afb861c946-utilities\") pod \"000050ff-5ba3-4660-be21-00afb861c946\" (UID: \"000050ff-5ba3-4660-be21-00afb861c946\") " Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.975253 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtsdh\" (UniqueName: 
\"kubernetes.io/projected/000050ff-5ba3-4660-be21-00afb861c946-kube-api-access-jtsdh\") pod \"000050ff-5ba3-4660-be21-00afb861c946\" (UID: \"000050ff-5ba3-4660-be21-00afb861c946\") " Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.975289 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000050ff-5ba3-4660-be21-00afb861c946-catalog-content\") pod \"000050ff-5ba3-4660-be21-00afb861c946\" (UID: \"000050ff-5ba3-4660-be21-00afb861c946\") " Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.975327 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8908bf3-e171-4859-80c7-baa64ca6e11c-marketplace-trusted-ca\") pod \"f8908bf3-e171-4859-80c7-baa64ca6e11c\" (UID: \"f8908bf3-e171-4859-80c7-baa64ca6e11c\") " Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.975512 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf79g\" (UniqueName: \"kubernetes.io/projected/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-kube-api-access-cf79g\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.975525 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.975549 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvb7j\" (UniqueName: \"kubernetes.io/projected/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-kube-api-access-wvb7j\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.975558 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbd0a96-64bb-4de8-8d4d-8861e24fd414-utilities\") on node \"crc\" DevicePath 
\"\"" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.975566 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.976104 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8908bf3-e171-4859-80c7-baa64ca6e11c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f8908bf3-e171-4859-80c7-baa64ca6e11c" (UID: "f8908bf3-e171-4859-80c7-baa64ca6e11c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.977402 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/000050ff-5ba3-4660-be21-00afb861c946-utilities" (OuterVolumeSpecName: "utilities") pod "000050ff-5ba3-4660-be21-00afb861c946" (UID: "000050ff-5ba3-4660-be21-00afb861c946"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.979098 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8908bf3-e171-4859-80c7-baa64ca6e11c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f8908bf3-e171-4859-80c7-baa64ca6e11c" (UID: "f8908bf3-e171-4859-80c7-baa64ca6e11c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.981712 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/000050ff-5ba3-4660-be21-00afb861c946-kube-api-access-jtsdh" (OuterVolumeSpecName: "kube-api-access-jtsdh") pod "000050ff-5ba3-4660-be21-00afb861c946" (UID: "000050ff-5ba3-4660-be21-00afb861c946"). 
InnerVolumeSpecName "kube-api-access-jtsdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.981753 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8908bf3-e171-4859-80c7-baa64ca6e11c-kube-api-access-9cjm5" (OuterVolumeSpecName: "kube-api-access-9cjm5") pod "f8908bf3-e171-4859-80c7-baa64ca6e11c" (UID: "f8908bf3-e171-4859-80c7-baa64ca6e11c"). InnerVolumeSpecName "kube-api-access-9cjm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.989152 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" (UID: "d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:03:25 crc kubenswrapper[4930]: I1124 12:03:25.996133 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/000050ff-5ba3-4660-be21-00afb861c946-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "000050ff-5ba3-4660-be21-00afb861c946" (UID: "000050ff-5ba3-4660-be21-00afb861c946"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.080441 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ss26h"] Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.080899 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtsdh\" (UniqueName: \"kubernetes.io/projected/000050ff-5ba3-4660-be21-00afb861c946-kube-api-access-jtsdh\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.080922 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000050ff-5ba3-4660-be21-00afb861c946-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.080945 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.080954 4930 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8908bf3-e171-4859-80c7-baa64ca6e11c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.080964 4930 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f8908bf3-e171-4859-80c7-baa64ca6e11c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.080972 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cjm5\" (UniqueName: \"kubernetes.io/projected/f8908bf3-e171-4859-80c7-baa64ca6e11c-kube-api-access-9cjm5\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.081044 4930 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000050ff-5ba3-4660-be21-00afb861c946-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.095490 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ss26h"] Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.106445 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tv4h" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.283036 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7bba02-37bc-4786-bd0a-3b5710779d25-catalog-content\") pod \"bc7bba02-37bc-4786-bd0a-3b5710779d25\" (UID: \"bc7bba02-37bc-4786-bd0a-3b5710779d25\") " Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.283148 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkjkh\" (UniqueName: \"kubernetes.io/projected/bc7bba02-37bc-4786-bd0a-3b5710779d25-kube-api-access-tkjkh\") pod \"bc7bba02-37bc-4786-bd0a-3b5710779d25\" (UID: \"bc7bba02-37bc-4786-bd0a-3b5710779d25\") " Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.283190 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7bba02-37bc-4786-bd0a-3b5710779d25-utilities\") pod \"bc7bba02-37bc-4786-bd0a-3b5710779d25\" (UID: \"bc7bba02-37bc-4786-bd0a-3b5710779d25\") " Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.285973 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc7bba02-37bc-4786-bd0a-3b5710779d25-utilities" (OuterVolumeSpecName: "utilities") pod "bc7bba02-37bc-4786-bd0a-3b5710779d25" (UID: "bc7bba02-37bc-4786-bd0a-3b5710779d25"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.289520 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc7bba02-37bc-4786-bd0a-3b5710779d25-kube-api-access-tkjkh" (OuterVolumeSpecName: "kube-api-access-tkjkh") pod "bc7bba02-37bc-4786-bd0a-3b5710779d25" (UID: "bc7bba02-37bc-4786-bd0a-3b5710779d25"). InnerVolumeSpecName "kube-api-access-tkjkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.338190 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc7bba02-37bc-4786-bd0a-3b5710779d25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc7bba02-37bc-4786-bd0a-3b5710779d25" (UID: "bc7bba02-37bc-4786-bd0a-3b5710779d25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.384568 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkjkh\" (UniqueName: \"kubernetes.io/projected/bc7bba02-37bc-4786-bd0a-3b5710779d25-kube-api-access-tkjkh\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.384604 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7bba02-37bc-4786-bd0a-3b5710779d25-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.384613 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7bba02-37bc-4786-bd0a-3b5710779d25-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.757207 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" 
event={"ID":"6adfccee-6f09-45b8-b8b9-4cd6fe524680","Type":"ContainerStarted","Data":"cc60082a9fa8791f169ea12b1d7a4b5b12b334aeb5fdf825be1b1db5619c43eb"} Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.757653 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" event={"ID":"6adfccee-6f09-45b8-b8b9-4cd6fe524680","Type":"ContainerStarted","Data":"7852bbff7312e08d98ee2523d80a426be9c7500d0df2715ac7fbf4e0e08626b4"} Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.757707 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.761110 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qh578" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.761267 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tv4h" event={"ID":"bc7bba02-37bc-4786-bd0a-3b5710779d25","Type":"ContainerDied","Data":"530d4a56f44a899b55bb08fbd65a42a1b366d3b26f4a83437bf66bb1f0eca3b8"} Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.761316 4930 scope.go:117] "RemoveContainer" containerID="75ef6df97721fabe29828e07c3051ca72e380a0b2df4beb88797c9113fe51067" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.761458 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tv4h" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.761520 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vggrt" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.761751 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7td4t" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.765042 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.777773 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vn8d4" podStartSLOduration=1.77775142 podStartE2EDuration="1.77775142s" podCreationTimestamp="2025-11-24 12:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:03:26.776744059 +0000 UTC m=+253.391072039" watchObservedRunningTime="2025-11-24 12:03:26.77775142 +0000 UTC m=+253.392079380" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.778702 4930 scope.go:117] "RemoveContainer" containerID="76e42f2fe966bdabe8e3eb1b1108708198eb2ef3bbd28e79d92d8afe867b57d9" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.794151 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vggrt"] Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.796773 4930 scope.go:117] "RemoveContainer" containerID="97d41366794a0f81e44b1003dba66e4934a1c82f7e17bf74e1f490a54f2d10c5" Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.798407 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vggrt"] Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.808898 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7td4t"] Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.826829 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7td4t"] Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.844706 4930 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qh578"] Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.847012 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qh578"] Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.860478 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4tv4h"] Nov 24 12:03:26 crc kubenswrapper[4930]: I1124 12:03:26.865057 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4tv4h"] Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.455268 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wvfmp"] Nov 24 12:03:27 crc kubenswrapper[4930]: E1124 12:03:27.455810 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8908bf3-e171-4859-80c7-baa64ca6e11c" containerName="marketplace-operator" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.455829 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8908bf3-e171-4859-80c7-baa64ca6e11c" containerName="marketplace-operator" Nov 24 12:03:27 crc kubenswrapper[4930]: E1124 12:03:27.455839 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" containerName="extract-content" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.455846 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" containerName="extract-content" Nov 24 12:03:27 crc kubenswrapper[4930]: E1124 12:03:27.455861 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" containerName="registry-server" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.455868 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" containerName="registry-server" Nov 24 
12:03:27 crc kubenswrapper[4930]: E1124 12:03:27.455881 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000050ff-5ba3-4660-be21-00afb861c946" containerName="extract-content" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.455888 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="000050ff-5ba3-4660-be21-00afb861c946" containerName="extract-content" Nov 24 12:03:27 crc kubenswrapper[4930]: E1124 12:03:27.455896 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000050ff-5ba3-4660-be21-00afb861c946" containerName="extract-utilities" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.455903 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="000050ff-5ba3-4660-be21-00afb861c946" containerName="extract-utilities" Nov 24 12:03:27 crc kubenswrapper[4930]: E1124 12:03:27.455915 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7bba02-37bc-4786-bd0a-3b5710779d25" containerName="extract-utilities" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.455920 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7bba02-37bc-4786-bd0a-3b5710779d25" containerName="extract-utilities" Nov 24 12:03:27 crc kubenswrapper[4930]: E1124 12:03:27.455928 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" containerName="extract-content" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.455934 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" containerName="extract-content" Nov 24 12:03:27 crc kubenswrapper[4930]: E1124 12:03:27.455942 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000050ff-5ba3-4660-be21-00afb861c946" containerName="registry-server" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.455948 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="000050ff-5ba3-4660-be21-00afb861c946" containerName="registry-server" Nov 24 
12:03:27 crc kubenswrapper[4930]: E1124 12:03:27.455989 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" containerName="registry-server" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.455995 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" containerName="registry-server" Nov 24 12:03:27 crc kubenswrapper[4930]: E1124 12:03:27.456003 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" containerName="extract-utilities" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.456009 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" containerName="extract-utilities" Nov 24 12:03:27 crc kubenswrapper[4930]: E1124 12:03:27.456016 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7bba02-37bc-4786-bd0a-3b5710779d25" containerName="extract-content" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.456022 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7bba02-37bc-4786-bd0a-3b5710779d25" containerName="extract-content" Nov 24 12:03:27 crc kubenswrapper[4930]: E1124 12:03:27.456031 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" containerName="extract-utilities" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.456037 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" containerName="extract-utilities" Nov 24 12:03:27 crc kubenswrapper[4930]: E1124 12:03:27.456045 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7bba02-37bc-4786-bd0a-3b5710779d25" containerName="registry-server" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.456051 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7bba02-37bc-4786-bd0a-3b5710779d25" containerName="registry-server" Nov 24 
12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.456158 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8908bf3-e171-4859-80c7-baa64ca6e11c" containerName="marketplace-operator" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.456170 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" containerName="registry-server" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.456179 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" containerName="registry-server" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.456192 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="000050ff-5ba3-4660-be21-00afb861c946" containerName="registry-server" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.456202 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7bba02-37bc-4786-bd0a-3b5710779d25" containerName="registry-server" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.463394 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.464060 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvfmp"] Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.467046 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.599756 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxs7s\" (UniqueName: \"kubernetes.io/projected/ab6112e7-2923-4b99-973b-bfc18820f99a-kube-api-access-sxs7s\") pod \"redhat-marketplace-wvfmp\" (UID: \"ab6112e7-2923-4b99-973b-bfc18820f99a\") " pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.599806 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab6112e7-2923-4b99-973b-bfc18820f99a-utilities\") pod \"redhat-marketplace-wvfmp\" (UID: \"ab6112e7-2923-4b99-973b-bfc18820f99a\") " pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.599928 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab6112e7-2923-4b99-973b-bfc18820f99a-catalog-content\") pod \"redhat-marketplace-wvfmp\" (UID: \"ab6112e7-2923-4b99-973b-bfc18820f99a\") " pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.650212 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7z7jl"] Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.651388 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.655005 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.658216 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7z7jl"] Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.700788 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab6112e7-2923-4b99-973b-bfc18820f99a-catalog-content\") pod \"redhat-marketplace-wvfmp\" (UID: \"ab6112e7-2923-4b99-973b-bfc18820f99a\") " pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.701033 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxs7s\" (UniqueName: \"kubernetes.io/projected/ab6112e7-2923-4b99-973b-bfc18820f99a-kube-api-access-sxs7s\") pod \"redhat-marketplace-wvfmp\" (UID: \"ab6112e7-2923-4b99-973b-bfc18820f99a\") " pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.701148 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab6112e7-2923-4b99-973b-bfc18820f99a-utilities\") pod \"redhat-marketplace-wvfmp\" (UID: \"ab6112e7-2923-4b99-973b-bfc18820f99a\") " pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.701429 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab6112e7-2923-4b99-973b-bfc18820f99a-catalog-content\") pod \"redhat-marketplace-wvfmp\" (UID: \"ab6112e7-2923-4b99-973b-bfc18820f99a\") " 
pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.701562 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab6112e7-2923-4b99-973b-bfc18820f99a-utilities\") pod \"redhat-marketplace-wvfmp\" (UID: \"ab6112e7-2923-4b99-973b-bfc18820f99a\") " pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.730405 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxs7s\" (UniqueName: \"kubernetes.io/projected/ab6112e7-2923-4b99-973b-bfc18820f99a-kube-api-access-sxs7s\") pod \"redhat-marketplace-wvfmp\" (UID: \"ab6112e7-2923-4b99-973b-bfc18820f99a\") " pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.781803 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.802292 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1fad967-63fa-4433-8aad-deb662733831-utilities\") pod \"redhat-operators-7z7jl\" (UID: \"f1fad967-63fa-4433-8aad-deb662733831\") " pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.802370 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw9tk\" (UniqueName: \"kubernetes.io/projected/f1fad967-63fa-4433-8aad-deb662733831-kube-api-access-sw9tk\") pod \"redhat-operators-7z7jl\" (UID: \"f1fad967-63fa-4433-8aad-deb662733831\") " pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.802437 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1fad967-63fa-4433-8aad-deb662733831-catalog-content\") pod \"redhat-operators-7z7jl\" (UID: \"f1fad967-63fa-4433-8aad-deb662733831\") " pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.904132 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw9tk\" (UniqueName: \"kubernetes.io/projected/f1fad967-63fa-4433-8aad-deb662733831-kube-api-access-sw9tk\") pod \"redhat-operators-7z7jl\" (UID: \"f1fad967-63fa-4433-8aad-deb662733831\") " pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.904497 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1fad967-63fa-4433-8aad-deb662733831-catalog-content\") pod \"redhat-operators-7z7jl\" (UID: \"f1fad967-63fa-4433-8aad-deb662733831\") " pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.904623 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1fad967-63fa-4433-8aad-deb662733831-utilities\") pod \"redhat-operators-7z7jl\" (UID: \"f1fad967-63fa-4433-8aad-deb662733831\") " pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.907003 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1fad967-63fa-4433-8aad-deb662733831-utilities\") pod \"redhat-operators-7z7jl\" (UID: \"f1fad967-63fa-4433-8aad-deb662733831\") " pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.907240 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f1fad967-63fa-4433-8aad-deb662733831-catalog-content\") pod \"redhat-operators-7z7jl\" (UID: \"f1fad967-63fa-4433-8aad-deb662733831\") " pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.927924 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw9tk\" (UniqueName: \"kubernetes.io/projected/f1fad967-63fa-4433-8aad-deb662733831-kube-api-access-sw9tk\") pod \"redhat-operators-7z7jl\" (UID: \"f1fad967-63fa-4433-8aad-deb662733831\") " pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:27 crc kubenswrapper[4930]: I1124 12:03:27.965791 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvfmp"] Nov 24 12:03:27 crc kubenswrapper[4930]: W1124 12:03:27.971832 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab6112e7_2923_4b99_973b_bfc18820f99a.slice/crio-56eafca47f79328c27c52a534c70504899a72c4e63648399e94a0b66bd49d845 WatchSource:0}: Error finding container 56eafca47f79328c27c52a534c70504899a72c4e63648399e94a0b66bd49d845: Status 404 returned error can't find the container with id 56eafca47f79328c27c52a534c70504899a72c4e63648399e94a0b66bd49d845 Nov 24 12:03:28 crc kubenswrapper[4930]: I1124 12:03:28.016981 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:28 crc kubenswrapper[4930]: I1124 12:03:28.098196 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="000050ff-5ba3-4660-be21-00afb861c946" path="/var/lib/kubelet/pods/000050ff-5ba3-4660-be21-00afb861c946/volumes" Nov 24 12:03:28 crc kubenswrapper[4930]: I1124 12:03:28.099234 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc7bba02-37bc-4786-bd0a-3b5710779d25" path="/var/lib/kubelet/pods/bc7bba02-37bc-4786-bd0a-3b5710779d25/volumes" Nov 24 12:03:28 crc kubenswrapper[4930]: I1124 12:03:28.099942 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43" path="/var/lib/kubelet/pods/d5ed8136-8b78-4a24-b8ab-b0e05e6ebb43/volumes" Nov 24 12:03:28 crc kubenswrapper[4930]: I1124 12:03:28.101219 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecbd0a96-64bb-4de8-8d4d-8861e24fd414" path="/var/lib/kubelet/pods/ecbd0a96-64bb-4de8-8d4d-8861e24fd414/volumes" Nov 24 12:03:28 crc kubenswrapper[4930]: I1124 12:03:28.104230 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8908bf3-e171-4859-80c7-baa64ca6e11c" path="/var/lib/kubelet/pods/f8908bf3-e171-4859-80c7-baa64ca6e11c/volumes" Nov 24 12:03:28 crc kubenswrapper[4930]: I1124 12:03:28.192610 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7z7jl"] Nov 24 12:03:28 crc kubenswrapper[4930]: W1124 12:03:28.206147 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1fad967_63fa_4433_8aad_deb662733831.slice/crio-20a0be0e97d9d629815fb78651be3277b38295e4f04508d0523df6cf41045150 WatchSource:0}: Error finding container 20a0be0e97d9d629815fb78651be3277b38295e4f04508d0523df6cf41045150: Status 404 returned error can't find the container with id 
20a0be0e97d9d629815fb78651be3277b38295e4f04508d0523df6cf41045150 Nov 24 12:03:28 crc kubenswrapper[4930]: I1124 12:03:28.774125 4930 generic.go:334] "Generic (PLEG): container finished" podID="ab6112e7-2923-4b99-973b-bfc18820f99a" containerID="9ed0e0314eaf18fca41d369d528d2beabcb5e9e090f47e54670088d07d5ea846" exitCode=0 Nov 24 12:03:28 crc kubenswrapper[4930]: I1124 12:03:28.774230 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvfmp" event={"ID":"ab6112e7-2923-4b99-973b-bfc18820f99a","Type":"ContainerDied","Data":"9ed0e0314eaf18fca41d369d528d2beabcb5e9e090f47e54670088d07d5ea846"} Nov 24 12:03:28 crc kubenswrapper[4930]: I1124 12:03:28.774262 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvfmp" event={"ID":"ab6112e7-2923-4b99-973b-bfc18820f99a","Type":"ContainerStarted","Data":"56eafca47f79328c27c52a534c70504899a72c4e63648399e94a0b66bd49d845"} Nov 24 12:03:28 crc kubenswrapper[4930]: I1124 12:03:28.776132 4930 generic.go:334] "Generic (PLEG): container finished" podID="f1fad967-63fa-4433-8aad-deb662733831" containerID="03c842e7519ff89e46f306e5a8ba0a36a49c02680ccd145d0e2f0090aee81dc1" exitCode=0 Nov 24 12:03:28 crc kubenswrapper[4930]: I1124 12:03:28.776176 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z7jl" event={"ID":"f1fad967-63fa-4433-8aad-deb662733831","Type":"ContainerDied","Data":"03c842e7519ff89e46f306e5a8ba0a36a49c02680ccd145d0e2f0090aee81dc1"} Nov 24 12:03:28 crc kubenswrapper[4930]: I1124 12:03:28.776253 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z7jl" event={"ID":"f1fad967-63fa-4433-8aad-deb662733831","Type":"ContainerStarted","Data":"20a0be0e97d9d629815fb78651be3277b38295e4f04508d0523df6cf41045150"} Nov 24 12:03:29 crc kubenswrapper[4930]: I1124 12:03:29.850980 4930 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-rnnzw"] Nov 24 12:03:29 crc kubenswrapper[4930]: I1124 12:03:29.854972 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:29 crc kubenswrapper[4930]: I1124 12:03:29.857276 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 12:03:29 crc kubenswrapper[4930]: I1124 12:03:29.862145 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnnzw"] Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.033775 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9cf3ee-0338-4245-a13e-edf25c6cc87c-utilities\") pod \"certified-operators-rnnzw\" (UID: \"fb9cf3ee-0338-4245-a13e-edf25c6cc87c\") " pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.034099 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9cf3ee-0338-4245-a13e-edf25c6cc87c-catalog-content\") pod \"certified-operators-rnnzw\" (UID: \"fb9cf3ee-0338-4245-a13e-edf25c6cc87c\") " pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.034279 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp466\" (UniqueName: \"kubernetes.io/projected/fb9cf3ee-0338-4245-a13e-edf25c6cc87c-kube-api-access-dp466\") pod \"certified-operators-rnnzw\" (UID: \"fb9cf3ee-0338-4245-a13e-edf25c6cc87c\") " pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.053468 4930 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-7hj8d"] Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.054671 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.056967 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.061994 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hj8d"] Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.135127 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp466\" (UniqueName: \"kubernetes.io/projected/fb9cf3ee-0338-4245-a13e-edf25c6cc87c-kube-api-access-dp466\") pod \"certified-operators-rnnzw\" (UID: \"fb9cf3ee-0338-4245-a13e-edf25c6cc87c\") " pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.135390 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9cf3ee-0338-4245-a13e-edf25c6cc87c-utilities\") pod \"certified-operators-rnnzw\" (UID: \"fb9cf3ee-0338-4245-a13e-edf25c6cc87c\") " pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.135504 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9cf3ee-0338-4245-a13e-edf25c6cc87c-catalog-content\") pod \"certified-operators-rnnzw\" (UID: \"fb9cf3ee-0338-4245-a13e-edf25c6cc87c\") " pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.135758 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/fb9cf3ee-0338-4245-a13e-edf25c6cc87c-utilities\") pod \"certified-operators-rnnzw\" (UID: \"fb9cf3ee-0338-4245-a13e-edf25c6cc87c\") " pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.136104 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9cf3ee-0338-4245-a13e-edf25c6cc87c-catalog-content\") pod \"certified-operators-rnnzw\" (UID: \"fb9cf3ee-0338-4245-a13e-edf25c6cc87c\") " pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.159031 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp466\" (UniqueName: \"kubernetes.io/projected/fb9cf3ee-0338-4245-a13e-edf25c6cc87c-kube-api-access-dp466\") pod \"certified-operators-rnnzw\" (UID: \"fb9cf3ee-0338-4245-a13e-edf25c6cc87c\") " pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.185932 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.237377 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2cst\" (UniqueName: \"kubernetes.io/projected/a6f3efd2-4683-4fab-9749-803e98a00cd2-kube-api-access-v2cst\") pod \"community-operators-7hj8d\" (UID: \"a6f3efd2-4683-4fab-9749-803e98a00cd2\") " pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.237799 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f3efd2-4683-4fab-9749-803e98a00cd2-utilities\") pod \"community-operators-7hj8d\" (UID: \"a6f3efd2-4683-4fab-9749-803e98a00cd2\") " pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.237949 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f3efd2-4683-4fab-9749-803e98a00cd2-catalog-content\") pod \"community-operators-7hj8d\" (UID: \"a6f3efd2-4683-4fab-9749-803e98a00cd2\") " pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.339356 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2cst\" (UniqueName: \"kubernetes.io/projected/a6f3efd2-4683-4fab-9749-803e98a00cd2-kube-api-access-v2cst\") pod \"community-operators-7hj8d\" (UID: \"a6f3efd2-4683-4fab-9749-803e98a00cd2\") " pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.339434 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f3efd2-4683-4fab-9749-803e98a00cd2-utilities\") pod 
\"community-operators-7hj8d\" (UID: \"a6f3efd2-4683-4fab-9749-803e98a00cd2\") " pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.339482 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f3efd2-4683-4fab-9749-803e98a00cd2-catalog-content\") pod \"community-operators-7hj8d\" (UID: \"a6f3efd2-4683-4fab-9749-803e98a00cd2\") " pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.340022 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f3efd2-4683-4fab-9749-803e98a00cd2-catalog-content\") pod \"community-operators-7hj8d\" (UID: \"a6f3efd2-4683-4fab-9749-803e98a00cd2\") " pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.340206 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f3efd2-4683-4fab-9749-803e98a00cd2-utilities\") pod \"community-operators-7hj8d\" (UID: \"a6f3efd2-4683-4fab-9749-803e98a00cd2\") " pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.358871 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2cst\" (UniqueName: \"kubernetes.io/projected/a6f3efd2-4683-4fab-9749-803e98a00cd2-kube-api-access-v2cst\") pod \"community-operators-7hj8d\" (UID: \"a6f3efd2-4683-4fab-9749-803e98a00cd2\") " pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.379112 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.791409 4930 generic.go:334] "Generic (PLEG): container finished" podID="ab6112e7-2923-4b99-973b-bfc18820f99a" containerID="423f4648dd45bcfc6a42d718d3c109a574fa9575b4bce0ff284bfc1ffa3dff99" exitCode=0 Nov 24 12:03:30 crc kubenswrapper[4930]: I1124 12:03:30.791468 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvfmp" event={"ID":"ab6112e7-2923-4b99-973b-bfc18820f99a","Type":"ContainerDied","Data":"423f4648dd45bcfc6a42d718d3c109a574fa9575b4bce0ff284bfc1ffa3dff99"} Nov 24 12:03:31 crc kubenswrapper[4930]: I1124 12:03:31.150765 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hj8d"] Nov 24 12:03:31 crc kubenswrapper[4930]: W1124 12:03:31.158071 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6f3efd2_4683_4fab_9749_803e98a00cd2.slice/crio-46d5101e7f9fe4c2a2cf06c96c1915c71021eff9f5ea5cb73036249d3a6b469b WatchSource:0}: Error finding container 46d5101e7f9fe4c2a2cf06c96c1915c71021eff9f5ea5cb73036249d3a6b469b: Status 404 returned error can't find the container with id 46d5101e7f9fe4c2a2cf06c96c1915c71021eff9f5ea5cb73036249d3a6b469b Nov 24 12:03:31 crc kubenswrapper[4930]: I1124 12:03:31.205379 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnnzw"] Nov 24 12:03:31 crc kubenswrapper[4930]: I1124 12:03:31.798835 4930 generic.go:334] "Generic (PLEG): container finished" podID="fb9cf3ee-0338-4245-a13e-edf25c6cc87c" containerID="ba23fa91241cce44ac49f7a021c7b87e61d2063a670b36c81f1587a88cfd4588" exitCode=0 Nov 24 12:03:31 crc kubenswrapper[4930]: I1124 12:03:31.798909 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnzw" 
event={"ID":"fb9cf3ee-0338-4245-a13e-edf25c6cc87c","Type":"ContainerDied","Data":"ba23fa91241cce44ac49f7a021c7b87e61d2063a670b36c81f1587a88cfd4588"} Nov 24 12:03:31 crc kubenswrapper[4930]: I1124 12:03:31.798983 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnzw" event={"ID":"fb9cf3ee-0338-4245-a13e-edf25c6cc87c","Type":"ContainerStarted","Data":"cd82feeb8f85b0753514aedfedd9708ada7a07f0ade76cda05c68c45ae4a347a"} Nov 24 12:03:31 crc kubenswrapper[4930]: I1124 12:03:31.804131 4930 generic.go:334] "Generic (PLEG): container finished" podID="a6f3efd2-4683-4fab-9749-803e98a00cd2" containerID="d60f9ba1bff8251191d56e33e538a3f2522f47b3418f5dc349ce80e3be0670f7" exitCode=0 Nov 24 12:03:31 crc kubenswrapper[4930]: I1124 12:03:31.804211 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hj8d" event={"ID":"a6f3efd2-4683-4fab-9749-803e98a00cd2","Type":"ContainerDied","Data":"d60f9ba1bff8251191d56e33e538a3f2522f47b3418f5dc349ce80e3be0670f7"} Nov 24 12:03:31 crc kubenswrapper[4930]: I1124 12:03:31.804244 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hj8d" event={"ID":"a6f3efd2-4683-4fab-9749-803e98a00cd2","Type":"ContainerStarted","Data":"46d5101e7f9fe4c2a2cf06c96c1915c71021eff9f5ea5cb73036249d3a6b469b"} Nov 24 12:03:31 crc kubenswrapper[4930]: I1124 12:03:31.808241 4930 generic.go:334] "Generic (PLEG): container finished" podID="f1fad967-63fa-4433-8aad-deb662733831" containerID="f7346f73c2402c74bc06ac2a22cd4a5504a12ce5ce0cc9f7f262fb953478699c" exitCode=0 Nov 24 12:03:31 crc kubenswrapper[4930]: I1124 12:03:31.808293 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z7jl" event={"ID":"f1fad967-63fa-4433-8aad-deb662733831","Type":"ContainerDied","Data":"f7346f73c2402c74bc06ac2a22cd4a5504a12ce5ce0cc9f7f262fb953478699c"} Nov 24 12:03:32 crc kubenswrapper[4930]: I1124 
12:03:32.826679 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvfmp" event={"ID":"ab6112e7-2923-4b99-973b-bfc18820f99a","Type":"ContainerStarted","Data":"4c544fca2613ce6e860d4ec3febdf88178245b417bde49146f3ec3d380192c5b"} Nov 24 12:03:33 crc kubenswrapper[4930]: I1124 12:03:33.835845 4930 generic.go:334] "Generic (PLEG): container finished" podID="a6f3efd2-4683-4fab-9749-803e98a00cd2" containerID="43f113bc046cd5244ccd8a68bda652ff0faed6e288b0c61993d1b477a80de4f8" exitCode=0 Nov 24 12:03:33 crc kubenswrapper[4930]: I1124 12:03:33.836222 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hj8d" event={"ID":"a6f3efd2-4683-4fab-9749-803e98a00cd2","Type":"ContainerDied","Data":"43f113bc046cd5244ccd8a68bda652ff0faed6e288b0c61993d1b477a80de4f8"} Nov 24 12:03:33 crc kubenswrapper[4930]: I1124 12:03:33.839842 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z7jl" event={"ID":"f1fad967-63fa-4433-8aad-deb662733831","Type":"ContainerStarted","Data":"2a5460c667b9af626abdaa10216b8c004bbb48307ec270eb62b2aa049bc7a139"} Nov 24 12:03:33 crc kubenswrapper[4930]: I1124 12:03:33.842678 4930 generic.go:334] "Generic (PLEG): container finished" podID="fb9cf3ee-0338-4245-a13e-edf25c6cc87c" containerID="a8cf0f69b607eb08922c24d6f6bc0e60d05693eaea890b4f177cccac71ed4588" exitCode=0 Nov 24 12:03:33 crc kubenswrapper[4930]: I1124 12:03:33.843619 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnzw" event={"ID":"fb9cf3ee-0338-4245-a13e-edf25c6cc87c","Type":"ContainerDied","Data":"a8cf0f69b607eb08922c24d6f6bc0e60d05693eaea890b4f177cccac71ed4588"} Nov 24 12:03:33 crc kubenswrapper[4930]: I1124 12:03:33.860280 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wvfmp" podStartSLOduration=3.121180435 
podStartE2EDuration="6.860262835s" podCreationTimestamp="2025-11-24 12:03:27 +0000 UTC" firstStartedPulling="2025-11-24 12:03:28.775726417 +0000 UTC m=+255.390054367" lastFinishedPulling="2025-11-24 12:03:32.514808817 +0000 UTC m=+259.129136767" observedRunningTime="2025-11-24 12:03:32.855620007 +0000 UTC m=+259.469947957" watchObservedRunningTime="2025-11-24 12:03:33.860262835 +0000 UTC m=+260.474590785" Nov 24 12:03:33 crc kubenswrapper[4930]: I1124 12:03:33.894817 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7z7jl" podStartSLOduration=2.902798593 podStartE2EDuration="6.894796155s" podCreationTimestamp="2025-11-24 12:03:27 +0000 UTC" firstStartedPulling="2025-11-24 12:03:28.777371488 +0000 UTC m=+255.391699438" lastFinishedPulling="2025-11-24 12:03:32.76936905 +0000 UTC m=+259.383697000" observedRunningTime="2025-11-24 12:03:33.89398709 +0000 UTC m=+260.508315040" watchObservedRunningTime="2025-11-24 12:03:33.894796155 +0000 UTC m=+260.509124105" Nov 24 12:03:35 crc kubenswrapper[4930]: I1124 12:03:35.862118 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnzw" event={"ID":"fb9cf3ee-0338-4245-a13e-edf25c6cc87c","Type":"ContainerStarted","Data":"54123ca90fef6f1965ce085d33d2b90a56855239ecdf3cd60c0f20a5b4655c31"} Nov 24 12:03:35 crc kubenswrapper[4930]: I1124 12:03:35.865190 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hj8d" event={"ID":"a6f3efd2-4683-4fab-9749-803e98a00cd2","Type":"ContainerStarted","Data":"f75a145d398c84d26e059edc0a4978702a5423eb00bdcee195552b3e0399d7e0"} Nov 24 12:03:35 crc kubenswrapper[4930]: I1124 12:03:35.885819 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rnnzw" podStartSLOduration=3.66660197 podStartE2EDuration="6.885802061s" podCreationTimestamp="2025-11-24 12:03:29 +0000 UTC" 
firstStartedPulling="2025-11-24 12:03:31.800373238 +0000 UTC m=+258.414701188" lastFinishedPulling="2025-11-24 12:03:35.019573319 +0000 UTC m=+261.633901279" observedRunningTime="2025-11-24 12:03:35.884918335 +0000 UTC m=+262.499246285" watchObservedRunningTime="2025-11-24 12:03:35.885802061 +0000 UTC m=+262.500130011" Nov 24 12:03:35 crc kubenswrapper[4930]: I1124 12:03:35.908927 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7hj8d" podStartSLOduration=2.664389948 podStartE2EDuration="5.908906117s" podCreationTimestamp="2025-11-24 12:03:30 +0000 UTC" firstStartedPulling="2025-11-24 12:03:31.80566704 +0000 UTC m=+258.419994990" lastFinishedPulling="2025-11-24 12:03:35.050183209 +0000 UTC m=+261.664511159" observedRunningTime="2025-11-24 12:03:35.904891137 +0000 UTC m=+262.519219097" watchObservedRunningTime="2025-11-24 12:03:35.908906117 +0000 UTC m=+262.523234067" Nov 24 12:03:37 crc kubenswrapper[4930]: I1124 12:03:37.782595 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:37 crc kubenswrapper[4930]: I1124 12:03:37.782877 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:37 crc kubenswrapper[4930]: I1124 12:03:37.836647 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:37 crc kubenswrapper[4930]: I1124 12:03:37.922095 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wvfmp" Nov 24 12:03:38 crc kubenswrapper[4930]: I1124 12:03:38.017723 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:38 crc kubenswrapper[4930]: I1124 12:03:38.017798 4930 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:39 crc kubenswrapper[4930]: I1124 12:03:39.057572 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7z7jl" podUID="f1fad967-63fa-4433-8aad-deb662733831" containerName="registry-server" probeResult="failure" output=< Nov 24 12:03:39 crc kubenswrapper[4930]: timeout: failed to connect service ":50051" within 1s Nov 24 12:03:39 crc kubenswrapper[4930]: > Nov 24 12:03:40 crc kubenswrapper[4930]: I1124 12:03:40.186806 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:40 crc kubenswrapper[4930]: I1124 12:03:40.186875 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:40 crc kubenswrapper[4930]: I1124 12:03:40.229818 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:40 crc kubenswrapper[4930]: I1124 12:03:40.379481 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:40 crc kubenswrapper[4930]: I1124 12:03:40.379529 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:40 crc kubenswrapper[4930]: I1124 12:03:40.418247 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:40 crc kubenswrapper[4930]: I1124 12:03:40.938615 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7hj8d" Nov 24 12:03:40 crc kubenswrapper[4930]: I1124 12:03:40.950625 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-rnnzw" Nov 24 12:03:48 crc kubenswrapper[4930]: I1124 12:03:48.052729 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:03:48 crc kubenswrapper[4930]: I1124 12:03:48.094510 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 12:05:31 crc kubenswrapper[4930]: I1124 12:05:31.809405 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:05:31 crc kubenswrapper[4930]: I1124 12:05:31.810469 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:06:01 crc kubenswrapper[4930]: I1124 12:06:01.809640 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:06:01 crc kubenswrapper[4930]: I1124 12:06:01.810691 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:06:31 crc kubenswrapper[4930]: I1124 12:06:31.809658 4930 
patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:06:31 crc kubenswrapper[4930]: I1124 12:06:31.810317 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:06:31 crc kubenswrapper[4930]: I1124 12:06:31.810369 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 12:06:31 crc kubenswrapper[4930]: I1124 12:06:31.811096 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3991dbeaed794b3c06979f1cfd6d6accfca0d3321783365d631089f10138ad78"} pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:06:31 crc kubenswrapper[4930]: I1124 12:06:31.811160 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://3991dbeaed794b3c06979f1cfd6d6accfca0d3321783365d631089f10138ad78" gracePeriod=600 Nov 24 12:06:32 crc kubenswrapper[4930]: I1124 12:06:32.864434 4930 generic.go:334] "Generic (PLEG): container finished" podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="3991dbeaed794b3c06979f1cfd6d6accfca0d3321783365d631089f10138ad78" exitCode=0 Nov 24 12:06:32 crc 
kubenswrapper[4930]: I1124 12:06:32.864518 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"3991dbeaed794b3c06979f1cfd6d6accfca0d3321783365d631089f10138ad78"} Nov 24 12:06:32 crc kubenswrapper[4930]: I1124 12:06:32.865084 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"a8f379626591aee6b54cbd3b52ff203403645d621f59c13e50ebe6f8ffb4735c"} Nov 24 12:06:32 crc kubenswrapper[4930]: I1124 12:06:32.865104 4930 scope.go:117] "RemoveContainer" containerID="a83df521fa74e7c614999be3d900508183b5e26471353b1a0117266273526103" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.401850 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2l7wc"] Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.403292 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.415359 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2l7wc"] Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.566525 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.566611 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-trusted-ca\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.566675 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-bound-sa-token\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.566738 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csxkj\" (UniqueName: \"kubernetes.io/projected/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-kube-api-access-csxkj\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.566936 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.566982 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-registry-tls\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.567103 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.567202 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-registry-certificates\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.592998 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.668265 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.668578 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-registry-tls\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.668682 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.668772 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-registry-certificates\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 
12:07:22.668845 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-trusted-ca\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.668957 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-bound-sa-token\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.669058 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csxkj\" (UniqueName: \"kubernetes.io/projected/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-kube-api-access-csxkj\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.669275 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.670633 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-trusted-ca\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 
12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.670716 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-registry-certificates\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.674220 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-registry-tls\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.674372 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.686331 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-bound-sa-token\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.686562 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csxkj\" (UniqueName: \"kubernetes.io/projected/67ebf9b9-1098-4ac1-9147-f3f04f6e3580-kube-api-access-csxkj\") pod \"image-registry-66df7c8f76-2l7wc\" (UID: \"67ebf9b9-1098-4ac1-9147-f3f04f6e3580\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.722389 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:22 crc kubenswrapper[4930]: I1124 12:07:22.891701 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2l7wc"] Nov 24 12:07:22 crc kubenswrapper[4930]: W1124 12:07:22.897141 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67ebf9b9_1098_4ac1_9147_f3f04f6e3580.slice/crio-1d4e5930bf594367607badbe5df3274bcfbf143121985b8719663e6907522214 WatchSource:0}: Error finding container 1d4e5930bf594367607badbe5df3274bcfbf143121985b8719663e6907522214: Status 404 returned error can't find the container with id 1d4e5930bf594367607badbe5df3274bcfbf143121985b8719663e6907522214 Nov 24 12:07:23 crc kubenswrapper[4930]: I1124 12:07:23.646642 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" event={"ID":"67ebf9b9-1098-4ac1-9147-f3f04f6e3580","Type":"ContainerStarted","Data":"3b146533a6caacff920a65b63dae036b2da1ee369c8ae755acccf74be8fe0489"} Nov 24 12:07:23 crc kubenswrapper[4930]: I1124 12:07:23.646988 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:23 crc kubenswrapper[4930]: I1124 12:07:23.647003 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" event={"ID":"67ebf9b9-1098-4ac1-9147-f3f04f6e3580","Type":"ContainerStarted","Data":"1d4e5930bf594367607badbe5df3274bcfbf143121985b8719663e6907522214"} Nov 24 12:07:23 crc kubenswrapper[4930]: I1124 12:07:23.668908 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" podStartSLOduration=1.668889576 podStartE2EDuration="1.668889576s" podCreationTimestamp="2025-11-24 12:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:07:23.663391485 +0000 UTC m=+490.277719465" watchObservedRunningTime="2025-11-24 12:07:23.668889576 +0000 UTC m=+490.283217526" Nov 24 12:07:42 crc kubenswrapper[4930]: I1124 12:07:42.727100 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2l7wc" Nov 24 12:07:42 crc kubenswrapper[4930]: I1124 12:07:42.775748 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fpv5v"] Nov 24 12:08:07 crc kubenswrapper[4930]: I1124 12:08:07.814178 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" podUID="d6022f6c-fa48-40b0-b2c2-e74b56071b38" containerName="registry" containerID="cri-o://1e5f8165c28a566b6f2470e789eab85fe1e1261e4ee3409de33b1ec58512bc94" gracePeriod=30 Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.167201 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.270590 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6022f6c-fa48-40b0-b2c2-e74b56071b38-registry-certificates\") pod \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.270718 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-registry-tls\") pod \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.270755 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-bound-sa-token\") pod \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.270788 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6022f6c-fa48-40b0-b2c2-e74b56071b38-trusted-ca\") pod \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.270824 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6022f6c-fa48-40b0-b2c2-e74b56071b38-ca-trust-extracted\") pod \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.270854 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-f8vgz\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-kube-api-access-f8vgz\") pod \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.270903 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6022f6c-fa48-40b0-b2c2-e74b56071b38-installation-pull-secrets\") pod \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.271688 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\" (UID: \"d6022f6c-fa48-40b0-b2c2-e74b56071b38\") " Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.272004 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6022f6c-fa48-40b0-b2c2-e74b56071b38-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d6022f6c-fa48-40b0-b2c2-e74b56071b38" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.272121 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6022f6c-fa48-40b0-b2c2-e74b56071b38-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d6022f6c-fa48-40b0-b2c2-e74b56071b38" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.278281 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6022f6c-fa48-40b0-b2c2-e74b56071b38-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d6022f6c-fa48-40b0-b2c2-e74b56071b38" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.278793 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d6022f6c-fa48-40b0-b2c2-e74b56071b38" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.278879 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-kube-api-access-f8vgz" (OuterVolumeSpecName: "kube-api-access-f8vgz") pod "d6022f6c-fa48-40b0-b2c2-e74b56071b38" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38"). InnerVolumeSpecName "kube-api-access-f8vgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.279077 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d6022f6c-fa48-40b0-b2c2-e74b56071b38" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.284160 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d6022f6c-fa48-40b0-b2c2-e74b56071b38" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.288629 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6022f6c-fa48-40b0-b2c2-e74b56071b38-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d6022f6c-fa48-40b0-b2c2-e74b56071b38" (UID: "d6022f6c-fa48-40b0-b2c2-e74b56071b38"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.373089 4930 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6022f6c-fa48-40b0-b2c2-e74b56071b38-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.373135 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8vgz\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-kube-api-access-f8vgz\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.373147 4930 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6022f6c-fa48-40b0-b2c2-e74b56071b38-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.373164 4930 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/d6022f6c-fa48-40b0-b2c2-e74b56071b38-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.373174 4930 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.373185 4930 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6022f6c-fa48-40b0-b2c2-e74b56071b38-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.373192 4930 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6022f6c-fa48-40b0-b2c2-e74b56071b38-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.893359 4930 generic.go:334] "Generic (PLEG): container finished" podID="d6022f6c-fa48-40b0-b2c2-e74b56071b38" containerID="1e5f8165c28a566b6f2470e789eab85fe1e1261e4ee3409de33b1ec58512bc94" exitCode=0 Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.893443 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" event={"ID":"d6022f6c-fa48-40b0-b2c2-e74b56071b38","Type":"ContainerDied","Data":"1e5f8165c28a566b6f2470e789eab85fe1e1261e4ee3409de33b1ec58512bc94"} Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.893484 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" event={"ID":"d6022f6c-fa48-40b0-b2c2-e74b56071b38","Type":"ContainerDied","Data":"bed49ad382eb252938a9134f63fc52f6eab46b9017725e30a2483322bac2210c"} Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.893525 4930 scope.go:117] "RemoveContainer" 
containerID="1e5f8165c28a566b6f2470e789eab85fe1e1261e4ee3409de33b1ec58512bc94" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.894323 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fpv5v" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.914345 4930 scope.go:117] "RemoveContainer" containerID="1e5f8165c28a566b6f2470e789eab85fe1e1261e4ee3409de33b1ec58512bc94" Nov 24 12:08:08 crc kubenswrapper[4930]: E1124 12:08:08.915515 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e5f8165c28a566b6f2470e789eab85fe1e1261e4ee3409de33b1ec58512bc94\": container with ID starting with 1e5f8165c28a566b6f2470e789eab85fe1e1261e4ee3409de33b1ec58512bc94 not found: ID does not exist" containerID="1e5f8165c28a566b6f2470e789eab85fe1e1261e4ee3409de33b1ec58512bc94" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.915588 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e5f8165c28a566b6f2470e789eab85fe1e1261e4ee3409de33b1ec58512bc94"} err="failed to get container status \"1e5f8165c28a566b6f2470e789eab85fe1e1261e4ee3409de33b1ec58512bc94\": rpc error: code = NotFound desc = could not find container \"1e5f8165c28a566b6f2470e789eab85fe1e1261e4ee3409de33b1ec58512bc94\": container with ID starting with 1e5f8165c28a566b6f2470e789eab85fe1e1261e4ee3409de33b1ec58512bc94 not found: ID does not exist" Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.929948 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fpv5v"] Nov 24 12:08:08 crc kubenswrapper[4930]: I1124 12:08:08.934219 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fpv5v"] Nov 24 12:08:10 crc kubenswrapper[4930]: I1124 12:08:10.093631 4930 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="d6022f6c-fa48-40b0-b2c2-e74b56071b38" path="/var/lib/kubelet/pods/d6022f6c-fa48-40b0-b2c2-e74b56071b38/volumes" Nov 24 12:08:14 crc kubenswrapper[4930]: I1124 12:08:14.206315 4930 scope.go:117] "RemoveContainer" containerID="1c3b1e1b11b47600d4578ab099ce80e48222786772324f4df560592193ef7fed" Nov 24 12:08:14 crc kubenswrapper[4930]: I1124 12:08:14.233926 4930 scope.go:117] "RemoveContainer" containerID="0e9a6a67a4f154aebce8a5b30f31f1590f3a0029827e843608b1f14ee9054fe4" Nov 24 12:08:14 crc kubenswrapper[4930]: I1124 12:08:14.251557 4930 scope.go:117] "RemoveContainer" containerID="b1ef8854c2b745b58b09f1bbc26a77aec4962d15a8d4df70ba0b88b59e76d186" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.697188 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-6cpnr"] Nov 24 12:08:54 crc kubenswrapper[4930]: E1124 12:08:54.698108 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6022f6c-fa48-40b0-b2c2-e74b56071b38" containerName="registry" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.698129 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6022f6c-fa48-40b0-b2c2-e74b56071b38" containerName="registry" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.698272 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6022f6c-fa48-40b0-b2c2-e74b56071b38" containerName="registry" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.698843 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-6cpnr" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.701033 4930 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vkhvc" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.702149 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.702920 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.711839 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7rggt"] Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.716363 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7rggt" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.716454 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-6cpnr"] Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.721672 4930 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-xvn8v" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.733740 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-54lcm"] Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.734732 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-54lcm" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.737598 4930 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-lrctf" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.750783 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7rggt"] Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.757802 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-54lcm"] Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.845877 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqmlf\" (UniqueName: \"kubernetes.io/projected/475d077a-f4ed-4d11-9cc9-ec7b5dc365fe-kube-api-access-dqmlf\") pod \"cert-manager-5b446d88c5-7rggt\" (UID: \"475d077a-f4ed-4d11-9cc9-ec7b5dc365fe\") " pod="cert-manager/cert-manager-5b446d88c5-7rggt" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.845940 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27qzk\" (UniqueName: \"kubernetes.io/projected/baaa4d3f-5068-4824-a874-eb5e484bcf5b-kube-api-access-27qzk\") pod \"cert-manager-webhook-5655c58dd6-54lcm\" (UID: \"baaa4d3f-5068-4824-a874-eb5e484bcf5b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-54lcm" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.846924 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw86z\" (UniqueName: \"kubernetes.io/projected/cbbf065d-9533-4da3-80b7-0f20e160caf4-kube-api-access-gw86z\") pod \"cert-manager-cainjector-7f985d654d-6cpnr\" (UID: \"cbbf065d-9533-4da3-80b7-0f20e160caf4\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-6cpnr" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.948011 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw86z\" (UniqueName: \"kubernetes.io/projected/cbbf065d-9533-4da3-80b7-0f20e160caf4-kube-api-access-gw86z\") pod \"cert-manager-cainjector-7f985d654d-6cpnr\" (UID: \"cbbf065d-9533-4da3-80b7-0f20e160caf4\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-6cpnr" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.948396 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqmlf\" (UniqueName: \"kubernetes.io/projected/475d077a-f4ed-4d11-9cc9-ec7b5dc365fe-kube-api-access-dqmlf\") pod \"cert-manager-5b446d88c5-7rggt\" (UID: \"475d077a-f4ed-4d11-9cc9-ec7b5dc365fe\") " pod="cert-manager/cert-manager-5b446d88c5-7rggt" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.948456 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27qzk\" (UniqueName: \"kubernetes.io/projected/baaa4d3f-5068-4824-a874-eb5e484bcf5b-kube-api-access-27qzk\") pod \"cert-manager-webhook-5655c58dd6-54lcm\" (UID: \"baaa4d3f-5068-4824-a874-eb5e484bcf5b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-54lcm" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.968833 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw86z\" (UniqueName: \"kubernetes.io/projected/cbbf065d-9533-4da3-80b7-0f20e160caf4-kube-api-access-gw86z\") pod \"cert-manager-cainjector-7f985d654d-6cpnr\" (UID: \"cbbf065d-9533-4da3-80b7-0f20e160caf4\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-6cpnr" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.968854 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27qzk\" (UniqueName: \"kubernetes.io/projected/baaa4d3f-5068-4824-a874-eb5e484bcf5b-kube-api-access-27qzk\") pod \"cert-manager-webhook-5655c58dd6-54lcm\" (UID: \"baaa4d3f-5068-4824-a874-eb5e484bcf5b\") " 
pod="cert-manager/cert-manager-webhook-5655c58dd6-54lcm" Nov 24 12:08:54 crc kubenswrapper[4930]: I1124 12:08:54.970643 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqmlf\" (UniqueName: \"kubernetes.io/projected/475d077a-f4ed-4d11-9cc9-ec7b5dc365fe-kube-api-access-dqmlf\") pod \"cert-manager-5b446d88c5-7rggt\" (UID: \"475d077a-f4ed-4d11-9cc9-ec7b5dc365fe\") " pod="cert-manager/cert-manager-5b446d88c5-7rggt" Nov 24 12:08:55 crc kubenswrapper[4930]: I1124 12:08:55.018733 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-6cpnr" Nov 24 12:08:55 crc kubenswrapper[4930]: I1124 12:08:55.037120 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7rggt" Nov 24 12:08:55 crc kubenswrapper[4930]: I1124 12:08:55.052094 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-54lcm" Nov 24 12:08:55 crc kubenswrapper[4930]: I1124 12:08:55.328024 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7rggt"] Nov 24 12:08:55 crc kubenswrapper[4930]: I1124 12:08:55.344488 4930 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:08:55 crc kubenswrapper[4930]: I1124 12:08:55.368885 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-54lcm"] Nov 24 12:08:55 crc kubenswrapper[4930]: W1124 12:08:55.375179 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaaa4d3f_5068_4824_a874_eb5e484bcf5b.slice/crio-40d5604c82527b82ed58407f39c4298caf8694257c705861580a0234b0de59aa WatchSource:0}: Error finding container 40d5604c82527b82ed58407f39c4298caf8694257c705861580a0234b0de59aa: Status 404 returned error can't 
find the container with id 40d5604c82527b82ed58407f39c4298caf8694257c705861580a0234b0de59aa Nov 24 12:08:55 crc kubenswrapper[4930]: I1124 12:08:55.484640 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-6cpnr"] Nov 24 12:08:56 crc kubenswrapper[4930]: I1124 12:08:56.233180 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-54lcm" event={"ID":"baaa4d3f-5068-4824-a874-eb5e484bcf5b","Type":"ContainerStarted","Data":"40d5604c82527b82ed58407f39c4298caf8694257c705861580a0234b0de59aa"} Nov 24 12:08:56 crc kubenswrapper[4930]: I1124 12:08:56.234861 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-6cpnr" event={"ID":"cbbf065d-9533-4da3-80b7-0f20e160caf4","Type":"ContainerStarted","Data":"31d96628a2be7f6f06a1dc0f66bea6cffed8c54583374118f55cfa2bb345d1cc"} Nov 24 12:08:56 crc kubenswrapper[4930]: I1124 12:08:56.236064 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-7rggt" event={"ID":"475d077a-f4ed-4d11-9cc9-ec7b5dc365fe","Type":"ContainerStarted","Data":"ab0cb99c3ba959094bb48a9e6547ac987a394ed5597824b15d940d14ea360127"} Nov 24 12:09:01 crc kubenswrapper[4930]: I1124 12:09:01.808893 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:09:01 crc kubenswrapper[4930]: I1124 12:09:01.809308 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 
24 12:09:03 crc kubenswrapper[4930]: I1124 12:09:03.689803 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b6q2v"] Nov 24 12:09:03 crc kubenswrapper[4930]: I1124 12:09:03.690196 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovn-controller" containerID="cri-o://fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc" gracePeriod=30 Nov 24 12:09:03 crc kubenswrapper[4930]: I1124 12:09:03.690346 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="sbdb" containerID="cri-o://d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335" gracePeriod=30 Nov 24 12:09:03 crc kubenswrapper[4930]: I1124 12:09:03.690383 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="nbdb" containerID="cri-o://055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e" gracePeriod=30 Nov 24 12:09:03 crc kubenswrapper[4930]: I1124 12:09:03.690419 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="northd" containerID="cri-o://38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a" gracePeriod=30 Nov 24 12:09:03 crc kubenswrapper[4930]: I1124 12:09:03.690466 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85" gracePeriod=30 Nov 24 
12:09:03 crc kubenswrapper[4930]: I1124 12:09:03.690496 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="kube-rbac-proxy-node" containerID="cri-o://9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6" gracePeriod=30 Nov 24 12:09:03 crc kubenswrapper[4930]: I1124 12:09:03.690525 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovn-acl-logging" containerID="cri-o://0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda" gracePeriod=30 Nov 24 12:09:03 crc kubenswrapper[4930]: I1124 12:09:03.734430 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" containerID="cri-o://5cc9ac8563be395cd2ee4f6dad8b594527f757b07855ece812c56b6e6917654f" gracePeriod=30 Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.293455 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5lvxv_68c34ffc-f1cd-4828-b83c-22bd0c02f364/kube-multus/2.log" Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.294070 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5lvxv_68c34ffc-f1cd-4828-b83c-22bd0c02f364/kube-multus/1.log" Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.294122 4930 generic.go:334] "Generic (PLEG): container finished" podID="68c34ffc-f1cd-4828-b83c-22bd0c02f364" containerID="58dd67e4f1a6eee0dddd3efb328f11e571b324eaebb707f289abac0be5b3a1d6" exitCode=2 Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.294197 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5lvxv" 
event={"ID":"68c34ffc-f1cd-4828-b83c-22bd0c02f364","Type":"ContainerDied","Data":"58dd67e4f1a6eee0dddd3efb328f11e571b324eaebb707f289abac0be5b3a1d6"} Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.294260 4930 scope.go:117] "RemoveContainer" containerID="c4d1a407cd414e67456d88d791c4b910faf7c93fb816df48310882990a1bb0ec" Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.294936 4930 scope.go:117] "RemoveContainer" containerID="58dd67e4f1a6eee0dddd3efb328f11e571b324eaebb707f289abac0be5b3a1d6" Nov 24 12:09:04 crc kubenswrapper[4930]: E1124 12:09:04.295194 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5lvxv_openshift-multus(68c34ffc-f1cd-4828-b83c-22bd0c02f364)\"" pod="openshift-multus/multus-5lvxv" podUID="68c34ffc-f1cd-4828-b83c-22bd0c02f364" Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.299921 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovnkube-controller/3.log" Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.302511 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovn-acl-logging/0.log" Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.302953 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovn-controller/0.log" Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303295 4930 generic.go:334] "Generic (PLEG): container finished" podID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerID="5cc9ac8563be395cd2ee4f6dad8b594527f757b07855ece812c56b6e6917654f" exitCode=0 Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303321 4930 generic.go:334] "Generic (PLEG): container finished" 
podID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerID="d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335" exitCode=0 Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303328 4930 generic.go:334] "Generic (PLEG): container finished" podID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerID="055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e" exitCode=0 Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303335 4930 generic.go:334] "Generic (PLEG): container finished" podID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerID="38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a" exitCode=0 Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303343 4930 generic.go:334] "Generic (PLEG): container finished" podID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerID="a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85" exitCode=0 Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303350 4930 generic.go:334] "Generic (PLEG): container finished" podID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerID="9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6" exitCode=0 Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303360 4930 generic.go:334] "Generic (PLEG): container finished" podID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerID="0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda" exitCode=143 Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303370 4930 generic.go:334] "Generic (PLEG): container finished" podID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerID="fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc" exitCode=143 Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303365 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"5cc9ac8563be395cd2ee4f6dad8b594527f757b07855ece812c56b6e6917654f"} Nov 24 
12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303408 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335"} Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303418 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e"} Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303427 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a"} Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303438 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85"} Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303447 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6"} Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303456 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda"} Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.303465 4930 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc"} Nov 24 12:09:04 crc kubenswrapper[4930]: E1124 12:09:04.609850 4930 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e is running failed: container process not found" containerID="055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 24 12:09:04 crc kubenswrapper[4930]: E1124 12:09:04.609937 4930 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335 is running failed: container process not found" containerID="d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 24 12:09:04 crc kubenswrapper[4930]: E1124 12:09:04.610200 4930 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335 is running failed: container process not found" containerID="d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 24 12:09:04 crc kubenswrapper[4930]: E1124 12:09:04.610490 4930 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e is running failed: container process not found" containerID="055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 24 12:09:04 crc kubenswrapper[4930]: E1124 12:09:04.610681 4930 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335 is running failed: container process not found" containerID="d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 24 12:09:04 crc kubenswrapper[4930]: E1124 12:09:04.610710 4930 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="sbdb" Nov 24 12:09:04 crc kubenswrapper[4930]: E1124 12:09:04.610816 4930 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e is running failed: container process not found" containerID="055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 24 12:09:04 crc kubenswrapper[4930]: E1124 12:09:04.610845 4930 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="nbdb" Nov 24 12:09:04 crc kubenswrapper[4930]: I1124 12:09:04.992856 4930 scope.go:117] "RemoveContainer" containerID="56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.029485 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovn-acl-logging/0.log" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.030193 4930 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovn-controller/0.log" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.031648 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104008 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bkqd8"] Nov 24 12:09:05 crc kubenswrapper[4930]: E1124 12:09:05.104241 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovn-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104254 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovn-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: E1124 12:09:05.104263 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104269 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: E1124 12:09:05.104276 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="nbdb" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104282 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="nbdb" Nov 24 12:09:05 crc kubenswrapper[4930]: E1124 12:09:05.104291 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104297 4930 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: E1124 12:09:05.104306 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104312 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: E1124 12:09:05.104320 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovn-acl-logging" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104326 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovn-acl-logging" Nov 24 12:09:05 crc kubenswrapper[4930]: E1124 12:09:05.104333 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="kube-rbac-proxy-node" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104339 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="kube-rbac-proxy-node" Nov 24 12:09:05 crc kubenswrapper[4930]: E1124 12:09:05.104346 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="northd" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104353 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="northd" Nov 24 12:09:05 crc kubenswrapper[4930]: E1124 12:09:05.104363 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104369 4930 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 12:09:05 crc kubenswrapper[4930]: E1124 12:09:05.104376 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="kubecfg-setup" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104382 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="kubecfg-setup" Nov 24 12:09:05 crc kubenswrapper[4930]: E1124 12:09:05.104392 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="sbdb" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104397 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="sbdb" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104499 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovn-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104512 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104519 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104526 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104555 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104565 4930 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="nbdb" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104575 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="northd" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104584 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="kube-rbac-proxy-node" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104595 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104603 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104617 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovn-acl-logging" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104627 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="sbdb" Nov 24 12:09:05 crc kubenswrapper[4930]: E1124 12:09:05.104730 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104741 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: E1124 12:09:05.104751 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.104757 4930 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" containerName="ovnkube-controller" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.106695 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.214662 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-cni-bin\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215235 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-systemd-units\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215269 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-kubelet\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215022 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215272 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215306 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-ovn\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215353 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215368 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-var-lib-openvswitch\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215424 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-log-socket\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215469 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215475 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215602 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). 
InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215561 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-log-socket" (OuterVolumeSpecName: "log-socket") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215503 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-run-netns\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215768 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovn-node-metrics-cert\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.215852 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovnkube-config\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217067 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217138 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-openvswitch\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217212 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217169 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-etc-openvswitch\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217276 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9gj6\" (UniqueName: \"kubernetes.io/projected/b3159aca-5e15-4f2c-ae74-e547f4a227f7-kube-api-access-t9gj6\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217312 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217320 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-run-ovn-kubernetes\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217380 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217414 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovnkube-script-lib\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217436 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-cni-netd\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217366 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217500 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-env-overrides\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.218597 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-systemd\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.218621 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-node-log\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.218645 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-slash\") pod \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\" (UID: \"b3159aca-5e15-4f2c-ae74-e547f4a227f7\") " Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.218699 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-node-log" (OuterVolumeSpecName: "node-log") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.218800 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-slash" (OuterVolumeSpecName: "host-slash") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217419 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.217992 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.218048 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.218480 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.218962 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-run-openvswitch\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.218994 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/507868a2-ceea-4a0f-a512-40d6e805f872-env-overrides\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219051 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-etc-openvswitch\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219079 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-slash\") pod 
\"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219104 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-cni-bin\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219126 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-systemd-units\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219176 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/507868a2-ceea-4a0f-a512-40d6e805f872-ovnkube-config\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219201 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-run-netns\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219258 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-var-lib-openvswitch\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219298 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/507868a2-ceea-4a0f-a512-40d6e805f872-ovnkube-script-lib\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219326 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-run-ovn\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219349 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-log-socket\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219424 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-node-log\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219457 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219477 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-run-systemd\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219501 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/507868a2-ceea-4a0f-a512-40d6e805f872-ovn-node-metrics-cert\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219628 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-run-ovn-kubernetes\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219654 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-cni-netd\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219715 4930 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-kubelet\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219780 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94f27\" (UniqueName: \"kubernetes.io/projected/507868a2-ceea-4a0f-a512-40d6e805f872-kube-api-access-94f27\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219868 4930 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219881 4930 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219891 4930 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219902 4930 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219913 4930 reconciler_common.go:293] "Volume detached for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219923 4930 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219940 4930 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219960 4930 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3159aca-5e15-4f2c-ae74-e547f4a227f7-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219977 4930 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219989 4930 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.219999 4930 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.220009 4930 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.220018 4930 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.220028 4930 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.220037 4930 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.220047 4930 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.220061 4930 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.223952 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.225767 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3159aca-5e15-4f2c-ae74-e547f4a227f7-kube-api-access-t9gj6" (OuterVolumeSpecName: "kube-api-access-t9gj6") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "kube-api-access-t9gj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.235276 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b3159aca-5e15-4f2c-ae74-e547f4a227f7" (UID: "b3159aca-5e15-4f2c-ae74-e547f4a227f7"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.318082 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5lvxv_68c34ffc-f1cd-4828-b83c-22bd0c02f364/kube-multus/2.log" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.321704 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-cni-bin\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.321768 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-systemd-units\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.321804 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/507868a2-ceea-4a0f-a512-40d6e805f872-ovnkube-config\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.321823 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-run-netns\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.321856 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-var-lib-openvswitch\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.321880 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/507868a2-ceea-4a0f-a512-40d6e805f872-ovnkube-script-lib\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.321909 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-run-ovn\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.321933 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-log-socket\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.321978 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-node-log\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322017 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322047 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-run-systemd\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322079 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/507868a2-ceea-4a0f-a512-40d6e805f872-ovn-node-metrics-cert\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322102 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-run-ovn-kubernetes\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322124 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-cni-netd\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322148 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-kubelet\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322174 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94f27\" (UniqueName: \"kubernetes.io/projected/507868a2-ceea-4a0f-a512-40d6e805f872-kube-api-access-94f27\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322216 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-run-openvswitch\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322239 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/507868a2-ceea-4a0f-a512-40d6e805f872-env-overrides\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322272 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-etc-openvswitch\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322297 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-slash\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322390 4930 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3159aca-5e15-4f2c-ae74-e547f4a227f7-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322449 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9gj6\" (UniqueName: \"kubernetes.io/projected/b3159aca-5e15-4f2c-ae74-e547f4a227f7-kube-api-access-t9gj6\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322468 4930 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3159aca-5e15-4f2c-ae74-e547f4a227f7-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322530 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-slash\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322682 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-cni-bin\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322735 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-run-systemd\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322755 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.322953 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-systemd-units\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.323746 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/507868a2-ceea-4a0f-a512-40d6e805f872-ovnkube-config\") 
pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.323816 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-run-netns\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.323866 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-var-lib-openvswitch\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.324275 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-run-ovn-kubernetes\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.324275 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-cni-netd\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.324361 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-run-ovn\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.324359 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-node-log\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.324376 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/507868a2-ceea-4a0f-a512-40d6e805f872-ovnkube-script-lib\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.324446 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-log-socket\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.324559 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-etc-openvswitch\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.324533 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-host-kubelet\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.324611 4930 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/507868a2-ceea-4a0f-a512-40d6e805f872-run-openvswitch\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.325117 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/507868a2-ceea-4a0f-a512-40d6e805f872-env-overrides\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.326314 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/507868a2-ceea-4a0f-a512-40d6e805f872-ovn-node-metrics-cert\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.330602 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovn-acl-logging/0.log" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.331949 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovn-controller/0.log" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.332792 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" event={"ID":"b3159aca-5e15-4f2c-ae74-e547f4a227f7","Type":"ContainerDied","Data":"c3d8b9ead05ab679034fea6e6d838be5bf35c0ce97cca7fd53ed732a57d93b4e"} Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.332957 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b6q2v" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.343505 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94f27\" (UniqueName: \"kubernetes.io/projected/507868a2-ceea-4a0f-a512-40d6e805f872-kube-api-access-94f27\") pod \"ovnkube-node-bkqd8\" (UID: \"507868a2-ceea-4a0f-a512-40d6e805f872\") " pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.366287 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b6q2v"] Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.371075 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b6q2v"] Nov 24 12:09:05 crc kubenswrapper[4930]: I1124 12:09:05.423294 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.019501 4930 scope.go:117] "RemoveContainer" containerID="5cc9ac8563be395cd2ee4f6dad8b594527f757b07855ece812c56b6e6917654f" Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.092909 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3159aca-5e15-4f2c-ae74-e547f4a227f7" path="/var/lib/kubelet/pods/b3159aca-5e15-4f2c-ae74-e547f4a227f7/volumes" Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.102113 4930 scope.go:117] "RemoveContainer" containerID="56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051" Nov 24 12:09:06 crc kubenswrapper[4930]: E1124 12:09:06.102725 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\": container with ID starting with 56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051 not found: ID does not exist" 
containerID="56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051" Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.102784 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051"} err="failed to get container status \"56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\": rpc error: code = NotFound desc = could not find container \"56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051\": container with ID starting with 56c7e8a8a3bbe4ecf9106ebd164bf936ecc990d333dc2d7e1270e62e96a9b051 not found: ID does not exist" Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.102825 4930 scope.go:117] "RemoveContainer" containerID="d458457a63a1bc0088c00d67579ce0d981280d6d1089f7a6242e0d766bbed335" Nov 24 12:09:06 crc kubenswrapper[4930]: W1124 12:09:06.181726 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod507868a2_ceea_4a0f_a512_40d6e805f872.slice/crio-2c8120947bedb0b5397ffd21223a5c0ccc7f52d9b4bfc5eb6e34cc3d1c79fa6b WatchSource:0}: Error finding container 2c8120947bedb0b5397ffd21223a5c0ccc7f52d9b4bfc5eb6e34cc3d1c79fa6b: Status 404 returned error can't find the container with id 2c8120947bedb0b5397ffd21223a5c0ccc7f52d9b4bfc5eb6e34cc3d1c79fa6b Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.188309 4930 scope.go:117] "RemoveContainer" containerID="055f8794506e7a9b38ede255fbd7595683daec88ba3138d033c02b0e7fb94e3e" Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.249855 4930 scope.go:117] "RemoveContainer" containerID="38b4add4bdccf1f7fbd06a616c45efd461e3862c735a30c0a12cf01a69eb171a" Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.277376 4930 scope.go:117] "RemoveContainer" containerID="a4f7373cc9b1405307689b2eb84d80a203858138b642ccccf3ae9685a4e61f85" Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.315593 4930 
scope.go:117] "RemoveContainer" containerID="9e76952b9be4380c2454d106be8845a8cdd62d685a87d06fe50177f99ab0dbe6" Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.338833 4930 scope.go:117] "RemoveContainer" containerID="0a02d1f8725f689c552f8b3283768a8c0f1b5099f4c55520738ed41ebef49cda" Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.344084 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" event={"ID":"507868a2-ceea-4a0f-a512-40d6e805f872","Type":"ContainerStarted","Data":"2c8120947bedb0b5397ffd21223a5c0ccc7f52d9b4bfc5eb6e34cc3d1c79fa6b"} Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.347249 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b6q2v_b3159aca-5e15-4f2c-ae74-e547f4a227f7/ovn-controller/0.log" Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.366923 4930 scope.go:117] "RemoveContainer" containerID="fe2c4e06c04c6ac2cd6db668bdb39461c0e239e92294b1dc73066bbcd281e3dc" Nov 24 12:09:06 crc kubenswrapper[4930]: I1124 12:09:06.383837 4930 scope.go:117] "RemoveContainer" containerID="5cf8749dab3f7dcf3bda96080d7ae435640a1ddb5be0e521d2e1958e155126e6" Nov 24 12:09:07 crc kubenswrapper[4930]: I1124 12:09:07.355223 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-7rggt" event={"ID":"475d077a-f4ed-4d11-9cc9-ec7b5dc365fe","Type":"ContainerStarted","Data":"1b6517cfa192e0f28c77cc0e0c39550c51a7ce517b52f8f5b3bd8cd041b88d01"} Nov 24 12:09:07 crc kubenswrapper[4930]: I1124 12:09:07.356749 4930 generic.go:334] "Generic (PLEG): container finished" podID="507868a2-ceea-4a0f-a512-40d6e805f872" containerID="f5aef929db04cf0db3dbf63bd76e712b3e09f5ecd6b909233aec76ecd0b53e78" exitCode=0 Nov 24 12:09:07 crc kubenswrapper[4930]: I1124 12:09:07.356812 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" 
event={"ID":"507868a2-ceea-4a0f-a512-40d6e805f872","Type":"ContainerDied","Data":"f5aef929db04cf0db3dbf63bd76e712b3e09f5ecd6b909233aec76ecd0b53e78"} Nov 24 12:09:07 crc kubenswrapper[4930]: I1124 12:09:07.359281 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-54lcm" event={"ID":"baaa4d3f-5068-4824-a874-eb5e484bcf5b","Type":"ContainerStarted","Data":"adbf95b33e54fa340c31f748e76aaacbfc393996e5fe819bfe8ae210b8655609"} Nov 24 12:09:07 crc kubenswrapper[4930]: I1124 12:09:07.359441 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-54lcm" Nov 24 12:09:07 crc kubenswrapper[4930]: I1124 12:09:07.361188 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-6cpnr" event={"ID":"cbbf065d-9533-4da3-80b7-0f20e160caf4","Type":"ContainerStarted","Data":"7d472d781565051ee0d06f1a8031448557d8e05e2c2481b4920e5f54b3d46ec8"} Nov 24 12:09:07 crc kubenswrapper[4930]: I1124 12:09:07.377731 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-7rggt" podStartSLOduration=2.532772202 podStartE2EDuration="13.3777052s" podCreationTimestamp="2025-11-24 12:08:54 +0000 UTC" firstStartedPulling="2025-11-24 12:08:55.344260335 +0000 UTC m=+581.958588285" lastFinishedPulling="2025-11-24 12:09:06.189193333 +0000 UTC m=+592.803521283" observedRunningTime="2025-11-24 12:09:07.375233378 +0000 UTC m=+593.989561328" watchObservedRunningTime="2025-11-24 12:09:07.3777052 +0000 UTC m=+593.992033150" Nov 24 12:09:07 crc kubenswrapper[4930]: I1124 12:09:07.390904 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-6cpnr" podStartSLOduration=2.63038284 podStartE2EDuration="13.390883976s" podCreationTimestamp="2025-11-24 12:08:54 +0000 UTC" firstStartedPulling="2025-11-24 12:08:55.494133591 +0000 UTC 
m=+582.108461542" lastFinishedPulling="2025-11-24 12:09:06.254634728 +0000 UTC m=+592.868962678" observedRunningTime="2025-11-24 12:09:07.388033323 +0000 UTC m=+594.002361263" watchObservedRunningTime="2025-11-24 12:09:07.390883976 +0000 UTC m=+594.005211936" Nov 24 12:09:07 crc kubenswrapper[4930]: I1124 12:09:07.428266 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-54lcm" podStartSLOduration=2.621478104 podStartE2EDuration="13.42824776s" podCreationTimestamp="2025-11-24 12:08:54 +0000 UTC" firstStartedPulling="2025-11-24 12:08:55.383380355 +0000 UTC m=+581.997708315" lastFinishedPulling="2025-11-24 12:09:06.190150021 +0000 UTC m=+592.804477971" observedRunningTime="2025-11-24 12:09:07.426292683 +0000 UTC m=+594.040620633" watchObservedRunningTime="2025-11-24 12:09:07.42824776 +0000 UTC m=+594.042575710" Nov 24 12:09:08 crc kubenswrapper[4930]: I1124 12:09:08.368626 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" event={"ID":"507868a2-ceea-4a0f-a512-40d6e805f872","Type":"ContainerStarted","Data":"0476e1f483adbf6633651a62cf317e9bf49425dedb83fa26b80b1e2abe935b5c"} Nov 24 12:09:08 crc kubenswrapper[4930]: I1124 12:09:08.368949 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" event={"ID":"507868a2-ceea-4a0f-a512-40d6e805f872","Type":"ContainerStarted","Data":"225251f197556d911befb9460f985c68c23fa8d0d0008afa50e4e01b685360fc"} Nov 24 12:09:08 crc kubenswrapper[4930]: I1124 12:09:08.368960 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" event={"ID":"507868a2-ceea-4a0f-a512-40d6e805f872","Type":"ContainerStarted","Data":"d1e3c4bb940ddbcff773ba85a4fb12fc4cbab36cd58d439012139ff3b59caccd"} Nov 24 12:09:08 crc kubenswrapper[4930]: I1124 12:09:08.368969 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" event={"ID":"507868a2-ceea-4a0f-a512-40d6e805f872","Type":"ContainerStarted","Data":"73686e2d65e65c502d245c1350ee5385dfdce20687e4f819bd1cbf37dee93792"} Nov 24 12:09:08 crc kubenswrapper[4930]: I1124 12:09:08.368978 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" event={"ID":"507868a2-ceea-4a0f-a512-40d6e805f872","Type":"ContainerStarted","Data":"56a4bec76d4bccb167c2a2630394191c47e627c8cd00baa2d868256e4118104b"} Nov 24 12:09:08 crc kubenswrapper[4930]: I1124 12:09:08.368990 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" event={"ID":"507868a2-ceea-4a0f-a512-40d6e805f872","Type":"ContainerStarted","Data":"7e1837e1560bf0ee1aaba22cdf87e772e736f005da66e7de762c1a5045bea4e7"} Nov 24 12:09:10 crc kubenswrapper[4930]: I1124 12:09:10.382247 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" event={"ID":"507868a2-ceea-4a0f-a512-40d6e805f872","Type":"ContainerStarted","Data":"a9abff3a874be0a41874ab071d39a505ae8fc42cc06f3eb5818c34e19bd519b3"} Nov 24 12:09:13 crc kubenswrapper[4930]: I1124 12:09:13.403515 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" event={"ID":"507868a2-ceea-4a0f-a512-40d6e805f872","Type":"ContainerStarted","Data":"ea80eb8e9bfe67caf593d9019c21d3d311c3c8b4d23aef90ff23a7fd04b88b96"} Nov 24 12:09:13 crc kubenswrapper[4930]: I1124 12:09:13.404109 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:13 crc kubenswrapper[4930]: I1124 12:09:13.404123 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:13 crc kubenswrapper[4930]: I1124 12:09:13.453113 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" podStartSLOduration=8.453096374 podStartE2EDuration="8.453096374s" podCreationTimestamp="2025-11-24 12:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:09:13.450390955 +0000 UTC m=+600.064718915" watchObservedRunningTime="2025-11-24 12:09:13.453096374 +0000 UTC m=+600.067424324" Nov 24 12:09:13 crc kubenswrapper[4930]: I1124 12:09:13.462246 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:14 crc kubenswrapper[4930]: I1124 12:09:14.301386 4930 scope.go:117] "RemoveContainer" containerID="219e9572546020c067cec54b04abeae3a965b2e440518a3908ba0b2bd6dd5e78" Nov 24 12:09:14 crc kubenswrapper[4930]: I1124 12:09:14.323275 4930 scope.go:117] "RemoveContainer" containerID="4303867d1df45887dd04ef0113c40b7ec05c4952cc50b40cd33f398b1866669a" Nov 24 12:09:14 crc kubenswrapper[4930]: I1124 12:09:14.340052 4930 scope.go:117] "RemoveContainer" containerID="29471652439bbb29c48825eafb23c0d462939bd3fef97e218bde9e4435bd8b6c" Nov 24 12:09:14 crc kubenswrapper[4930]: I1124 12:09:14.360169 4930 scope.go:117] "RemoveContainer" containerID="5228f2da0d0d57ecdb94cdcc8d4463fcdd425e51f7dc1420b22c50d7ba9cc6f2" Nov 24 12:09:14 crc kubenswrapper[4930]: I1124 12:09:14.409427 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:14 crc kubenswrapper[4930]: I1124 12:09:14.436764 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:15 crc kubenswrapper[4930]: I1124 12:09:15.057824 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-54lcm" Nov 24 12:09:16 crc kubenswrapper[4930]: I1124 12:09:16.085094 4930 scope.go:117] 
"RemoveContainer" containerID="58dd67e4f1a6eee0dddd3efb328f11e571b324eaebb707f289abac0be5b3a1d6" Nov 24 12:09:16 crc kubenswrapper[4930]: E1124 12:09:16.085696 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5lvxv_openshift-multus(68c34ffc-f1cd-4828-b83c-22bd0c02f364)\"" pod="openshift-multus/multus-5lvxv" podUID="68c34ffc-f1cd-4828-b83c-22bd0c02f364" Nov 24 12:09:29 crc kubenswrapper[4930]: I1124 12:09:29.084948 4930 scope.go:117] "RemoveContainer" containerID="58dd67e4f1a6eee0dddd3efb328f11e571b324eaebb707f289abac0be5b3a1d6" Nov 24 12:09:29 crc kubenswrapper[4930]: I1124 12:09:29.497740 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5lvxv_68c34ffc-f1cd-4828-b83c-22bd0c02f364/kube-multus/2.log" Nov 24 12:09:29 crc kubenswrapper[4930]: I1124 12:09:29.498128 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5lvxv" event={"ID":"68c34ffc-f1cd-4828-b83c-22bd0c02f364","Type":"ContainerStarted","Data":"5c7a16203bcb5138395e8965d702210ce83f3ce2c183665fd54b24526d212930"} Nov 24 12:09:31 crc kubenswrapper[4930]: I1124 12:09:31.809515 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:09:31 crc kubenswrapper[4930]: I1124 12:09:31.809905 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:09:35 crc kubenswrapper[4930]: I1124 
12:09:35.448752 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bkqd8" Nov 24 12:09:53 crc kubenswrapper[4930]: I1124 12:09:53.858721 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4"] Nov 24 12:09:53 crc kubenswrapper[4930]: I1124 12:09:53.860494 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" Nov 24 12:09:53 crc kubenswrapper[4930]: I1124 12:09:53.864509 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 12:09:53 crc kubenswrapper[4930]: I1124 12:09:53.871240 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4"] Nov 24 12:09:53 crc kubenswrapper[4930]: I1124 12:09:53.980650 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a6820ef-bc97-4869-9957-a94fbefdb9d9-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4\" (UID: \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" Nov 24 12:09:53 crc kubenswrapper[4930]: I1124 12:09:53.980709 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpfr5\" (UniqueName: \"kubernetes.io/projected/2a6820ef-bc97-4869-9957-a94fbefdb9d9-kube-api-access-bpfr5\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4\" (UID: \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" Nov 24 12:09:53 crc kubenswrapper[4930]: I1124 
12:09:53.980739 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a6820ef-bc97-4869-9957-a94fbefdb9d9-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4\" (UID: \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" Nov 24 12:09:54 crc kubenswrapper[4930]: I1124 12:09:54.081888 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a6820ef-bc97-4869-9957-a94fbefdb9d9-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4\" (UID: \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" Nov 24 12:09:54 crc kubenswrapper[4930]: I1124 12:09:54.082208 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpfr5\" (UniqueName: \"kubernetes.io/projected/2a6820ef-bc97-4869-9957-a94fbefdb9d9-kube-api-access-bpfr5\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4\" (UID: \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" Nov 24 12:09:54 crc kubenswrapper[4930]: I1124 12:09:54.082308 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a6820ef-bc97-4869-9957-a94fbefdb9d9-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4\" (UID: \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" Nov 24 12:09:54 crc kubenswrapper[4930]: I1124 12:09:54.083008 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2a6820ef-bc97-4869-9957-a94fbefdb9d9-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4\" (UID: \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" Nov 24 12:09:54 crc kubenswrapper[4930]: I1124 12:09:54.083046 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a6820ef-bc97-4869-9957-a94fbefdb9d9-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4\" (UID: \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" Nov 24 12:09:54 crc kubenswrapper[4930]: I1124 12:09:54.101834 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpfr5\" (UniqueName: \"kubernetes.io/projected/2a6820ef-bc97-4869-9957-a94fbefdb9d9-kube-api-access-bpfr5\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4\" (UID: \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" Nov 24 12:09:54 crc kubenswrapper[4930]: I1124 12:09:54.229339 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" Nov 24 12:09:54 crc kubenswrapper[4930]: I1124 12:09:54.410680 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4"] Nov 24 12:09:54 crc kubenswrapper[4930]: I1124 12:09:54.635077 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" event={"ID":"2a6820ef-bc97-4869-9957-a94fbefdb9d9","Type":"ContainerStarted","Data":"71e250f09ee3db9a6fe42975773a2266e80be38e815eac02394aba19dfca9a54"} Nov 24 12:09:54 crc kubenswrapper[4930]: I1124 12:09:54.635139 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" event={"ID":"2a6820ef-bc97-4869-9957-a94fbefdb9d9","Type":"ContainerStarted","Data":"dcac661034082a54e4c558976951e40d6c76a39b32f8d2617ede7f932a4f0eac"} Nov 24 12:09:55 crc kubenswrapper[4930]: I1124 12:09:55.642954 4930 generic.go:334] "Generic (PLEG): container finished" podID="2a6820ef-bc97-4869-9957-a94fbefdb9d9" containerID="71e250f09ee3db9a6fe42975773a2266e80be38e815eac02394aba19dfca9a54" exitCode=0 Nov 24 12:09:55 crc kubenswrapper[4930]: I1124 12:09:55.643032 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" event={"ID":"2a6820ef-bc97-4869-9957-a94fbefdb9d9","Type":"ContainerDied","Data":"71e250f09ee3db9a6fe42975773a2266e80be38e815eac02394aba19dfca9a54"} Nov 24 12:09:57 crc kubenswrapper[4930]: I1124 12:09:57.662289 4930 generic.go:334] "Generic (PLEG): container finished" podID="2a6820ef-bc97-4869-9957-a94fbefdb9d9" containerID="75dc1b2c824d6e02a04ea800db6e0639e6f3ca323988651a3728ebeb16bc88f5" exitCode=0 Nov 24 12:09:57 crc kubenswrapper[4930]: I1124 12:09:57.662335 4930 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" event={"ID":"2a6820ef-bc97-4869-9957-a94fbefdb9d9","Type":"ContainerDied","Data":"75dc1b2c824d6e02a04ea800db6e0639e6f3ca323988651a3728ebeb16bc88f5"} Nov 24 12:09:58 crc kubenswrapper[4930]: I1124 12:09:58.671918 4930 generic.go:334] "Generic (PLEG): container finished" podID="2a6820ef-bc97-4869-9957-a94fbefdb9d9" containerID="8421344b7abe651de515bb1a2a952d3ff549945d461e705a8f14ac2ca0b53687" exitCode=0 Nov 24 12:09:58 crc kubenswrapper[4930]: I1124 12:09:58.672014 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" event={"ID":"2a6820ef-bc97-4869-9957-a94fbefdb9d9","Type":"ContainerDied","Data":"8421344b7abe651de515bb1a2a952d3ff549945d461e705a8f14ac2ca0b53687"} Nov 24 12:09:59 crc kubenswrapper[4930]: I1124 12:09:59.897354 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" Nov 24 12:09:59 crc kubenswrapper[4930]: I1124 12:09:59.981848 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a6820ef-bc97-4869-9957-a94fbefdb9d9-bundle\") pod \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\" (UID: \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\") " Nov 24 12:09:59 crc kubenswrapper[4930]: I1124 12:09:59.981974 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a6820ef-bc97-4869-9957-a94fbefdb9d9-util\") pod \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\" (UID: \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\") " Nov 24 12:09:59 crc kubenswrapper[4930]: I1124 12:09:59.982059 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpfr5\" (UniqueName: \"kubernetes.io/projected/2a6820ef-bc97-4869-9957-a94fbefdb9d9-kube-api-access-bpfr5\") pod \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\" (UID: \"2a6820ef-bc97-4869-9957-a94fbefdb9d9\") " Nov 24 12:09:59 crc kubenswrapper[4930]: I1124 12:09:59.983462 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a6820ef-bc97-4869-9957-a94fbefdb9d9-bundle" (OuterVolumeSpecName: "bundle") pod "2a6820ef-bc97-4869-9957-a94fbefdb9d9" (UID: "2a6820ef-bc97-4869-9957-a94fbefdb9d9"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:09:59 crc kubenswrapper[4930]: I1124 12:09:59.983996 4930 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2a6820ef-bc97-4869-9957-a94fbefdb9d9-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:09:59 crc kubenswrapper[4930]: I1124 12:09:59.989910 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6820ef-bc97-4869-9957-a94fbefdb9d9-kube-api-access-bpfr5" (OuterVolumeSpecName: "kube-api-access-bpfr5") pod "2a6820ef-bc97-4869-9957-a94fbefdb9d9" (UID: "2a6820ef-bc97-4869-9957-a94fbefdb9d9"). InnerVolumeSpecName "kube-api-access-bpfr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:10:00 crc kubenswrapper[4930]: I1124 12:10:00.085764 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpfr5\" (UniqueName: \"kubernetes.io/projected/2a6820ef-bc97-4869-9957-a94fbefdb9d9-kube-api-access-bpfr5\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:00 crc kubenswrapper[4930]: I1124 12:10:00.316554 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a6820ef-bc97-4869-9957-a94fbefdb9d9-util" (OuterVolumeSpecName: "util") pod "2a6820ef-bc97-4869-9957-a94fbefdb9d9" (UID: "2a6820ef-bc97-4869-9957-a94fbefdb9d9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:10:00 crc kubenswrapper[4930]: I1124 12:10:00.391702 4930 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2a6820ef-bc97-4869-9957-a94fbefdb9d9-util\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:00 crc kubenswrapper[4930]: I1124 12:10:00.684630 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" event={"ID":"2a6820ef-bc97-4869-9957-a94fbefdb9d9","Type":"ContainerDied","Data":"dcac661034082a54e4c558976951e40d6c76a39b32f8d2617ede7f932a4f0eac"} Nov 24 12:10:00 crc kubenswrapper[4930]: I1124 12:10:00.684670 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcac661034082a54e4c558976951e40d6c76a39b32f8d2617ede7f932a4f0eac" Nov 24 12:10:00 crc kubenswrapper[4930]: I1124 12:10:00.684718 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4" Nov 24 12:10:01 crc kubenswrapper[4930]: I1124 12:10:01.809350 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:10:01 crc kubenswrapper[4930]: I1124 12:10:01.809780 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:10:01 crc kubenswrapper[4930]: I1124 12:10:01.809840 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 12:10:01 crc kubenswrapper[4930]: I1124 12:10:01.810387 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8f379626591aee6b54cbd3b52ff203403645d621f59c13e50ebe6f8ffb4735c"} pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:10:01 crc kubenswrapper[4930]: I1124 12:10:01.810454 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://a8f379626591aee6b54cbd3b52ff203403645d621f59c13e50ebe6f8ffb4735c" gracePeriod=600 Nov 24 12:10:02 crc kubenswrapper[4930]: I1124 12:10:02.699146 4930 generic.go:334] "Generic (PLEG): container finished" podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="a8f379626591aee6b54cbd3b52ff203403645d621f59c13e50ebe6f8ffb4735c" exitCode=0 Nov 24 12:10:02 crc kubenswrapper[4930]: I1124 12:10:02.699190 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"a8f379626591aee6b54cbd3b52ff203403645d621f59c13e50ebe6f8ffb4735c"} Nov 24 12:10:02 crc kubenswrapper[4930]: I1124 12:10:02.699828 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"c44ba46dc50db3a20b23969f9cbea1fb9792d70b783114e4cab0eaa15b434f1d"} Nov 24 12:10:02 crc kubenswrapper[4930]: I1124 12:10:02.699868 4930 scope.go:117] "RemoveContainer" 
containerID="3991dbeaed794b3c06979f1cfd6d6accfca0d3321783365d631089f10138ad78" Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.391592 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-vkg6v"] Nov 24 12:10:05 crc kubenswrapper[4930]: E1124 12:10:05.392187 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6820ef-bc97-4869-9957-a94fbefdb9d9" containerName="pull" Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.392206 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6820ef-bc97-4869-9957-a94fbefdb9d9" containerName="pull" Nov 24 12:10:05 crc kubenswrapper[4930]: E1124 12:10:05.392219 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6820ef-bc97-4869-9957-a94fbefdb9d9" containerName="extract" Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.392227 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6820ef-bc97-4869-9957-a94fbefdb9d9" containerName="extract" Nov 24 12:10:05 crc kubenswrapper[4930]: E1124 12:10:05.392239 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6820ef-bc97-4869-9957-a94fbefdb9d9" containerName="util" Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.392247 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6820ef-bc97-4869-9957-a94fbefdb9d9" containerName="util" Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.392363 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6820ef-bc97-4869-9957-a94fbefdb9d9" containerName="extract" Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.392887 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-vkg6v" Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.396453 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-57xg5" Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.396813 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.404410 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-vkg6v"] Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.406229 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.454191 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f87lc\" (UniqueName: \"kubernetes.io/projected/326bae6a-98bd-4c7a-adfe-68f5680ac766-kube-api-access-f87lc\") pod \"nmstate-operator-557fdffb88-vkg6v\" (UID: \"326bae6a-98bd-4c7a-adfe-68f5680ac766\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-vkg6v" Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.556327 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f87lc\" (UniqueName: \"kubernetes.io/projected/326bae6a-98bd-4c7a-adfe-68f5680ac766-kube-api-access-f87lc\") pod \"nmstate-operator-557fdffb88-vkg6v\" (UID: \"326bae6a-98bd-4c7a-adfe-68f5680ac766\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-vkg6v" Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.575324 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f87lc\" (UniqueName: \"kubernetes.io/projected/326bae6a-98bd-4c7a-adfe-68f5680ac766-kube-api-access-f87lc\") pod \"nmstate-operator-557fdffb88-vkg6v\" (UID: 
\"326bae6a-98bd-4c7a-adfe-68f5680ac766\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-vkg6v" Nov 24 12:10:05 crc kubenswrapper[4930]: I1124 12:10:05.720607 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-vkg6v" Nov 24 12:10:06 crc kubenswrapper[4930]: I1124 12:10:06.121291 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-vkg6v"] Nov 24 12:10:06 crc kubenswrapper[4930]: W1124 12:10:06.135525 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod326bae6a_98bd_4c7a_adfe_68f5680ac766.slice/crio-71c1df540616c52e3aad9f8b4935156321a01a937b95233e208630126be39d42 WatchSource:0}: Error finding container 71c1df540616c52e3aad9f8b4935156321a01a937b95233e208630126be39d42: Status 404 returned error can't find the container with id 71c1df540616c52e3aad9f8b4935156321a01a937b95233e208630126be39d42 Nov 24 12:10:06 crc kubenswrapper[4930]: I1124 12:10:06.723238 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-vkg6v" event={"ID":"326bae6a-98bd-4c7a-adfe-68f5680ac766","Type":"ContainerStarted","Data":"71c1df540616c52e3aad9f8b4935156321a01a937b95233e208630126be39d42"} Nov 24 12:10:08 crc kubenswrapper[4930]: I1124 12:10:08.737952 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-vkg6v" event={"ID":"326bae6a-98bd-4c7a-adfe-68f5680ac766","Type":"ContainerStarted","Data":"1744c58f708da95805145d6f0971b1b7af20de1c95ca02ecfe9dac3c8bb39c00"} Nov 24 12:10:08 crc kubenswrapper[4930]: I1124 12:10:08.754007 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-vkg6v" podStartSLOduration=1.929818585 podStartE2EDuration="3.753981137s" podCreationTimestamp="2025-11-24 12:10:05 +0000 UTC" 
firstStartedPulling="2025-11-24 12:10:06.137642191 +0000 UTC m=+652.751970141" lastFinishedPulling="2025-11-24 12:10:07.961804743 +0000 UTC m=+654.576132693" observedRunningTime="2025-11-24 12:10:08.750406323 +0000 UTC m=+655.364734293" watchObservedRunningTime="2025-11-24 12:10:08.753981137 +0000 UTC m=+655.368309107" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.178223 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-4tlrm"] Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.185796 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4tlrm" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.191783 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9mmdx" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.209443 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-4tlrm"] Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.216884 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-569q4"] Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.259812 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.267817 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.271797 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-569q4"] Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.290276 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-zgbjb"] Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.291476 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.315910 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fd169461-4da3-47da-b2b5-d7c796f9eec9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-569q4\" (UID: \"fd169461-4da3-47da-b2b5-d7c796f9eec9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.316012 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqhxp\" (UniqueName: \"kubernetes.io/projected/aa0b5808-c9b9-42b0-b585-1677b72ed1f3-kube-api-access-pqhxp\") pod \"nmstate-metrics-5dcf9c57c5-4tlrm\" (UID: \"aa0b5808-c9b9-42b0-b585-1677b72ed1f3\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4tlrm" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.316047 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrp8x\" (UniqueName: \"kubernetes.io/projected/fd169461-4da3-47da-b2b5-d7c796f9eec9-kube-api-access-vrp8x\") pod \"nmstate-webhook-6b89b748d8-569q4\" (UID: 
\"fd169461-4da3-47da-b2b5-d7c796f9eec9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.398294 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm"] Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.399181 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.403072 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-g2mpz" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.405844 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.405840 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.417722 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrp8x\" (UniqueName: \"kubernetes.io/projected/fd169461-4da3-47da-b2b5-d7c796f9eec9-kube-api-access-vrp8x\") pod \"nmstate-webhook-6b89b748d8-569q4\" (UID: \"fd169461-4da3-47da-b2b5-d7c796f9eec9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.417815 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8-nmstate-lock\") pod \"nmstate-handler-zgbjb\" (UID: \"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8\") " pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.417886 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" 
(UniqueName: \"kubernetes.io/host-path/06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8-dbus-socket\") pod \"nmstate-handler-zgbjb\" (UID: \"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8\") " pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.417957 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fd169461-4da3-47da-b2b5-d7c796f9eec9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-569q4\" (UID: \"fd169461-4da3-47da-b2b5-d7c796f9eec9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" Nov 24 12:10:14 crc kubenswrapper[4930]: E1124 12:10:14.418054 4930 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.418074 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22b4x\" (UniqueName: \"kubernetes.io/projected/06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8-kube-api-access-22b4x\") pod \"nmstate-handler-zgbjb\" (UID: \"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8\") " pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: E1124 12:10:14.418133 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd169461-4da3-47da-b2b5-d7c796f9eec9-tls-key-pair podName:fd169461-4da3-47da-b2b5-d7c796f9eec9 nodeName:}" failed. No retries permitted until 2025-11-24 12:10:14.918106794 +0000 UTC m=+661.532434764 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/fd169461-4da3-47da-b2b5-d7c796f9eec9-tls-key-pair") pod "nmstate-webhook-6b89b748d8-569q4" (UID: "fd169461-4da3-47da-b2b5-d7c796f9eec9") : secret "openshift-nmstate-webhook" not found Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.418286 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqhxp\" (UniqueName: \"kubernetes.io/projected/aa0b5808-c9b9-42b0-b585-1677b72ed1f3-kube-api-access-pqhxp\") pod \"nmstate-metrics-5dcf9c57c5-4tlrm\" (UID: \"aa0b5808-c9b9-42b0-b585-1677b72ed1f3\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4tlrm" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.418327 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8-ovs-socket\") pod \"nmstate-handler-zgbjb\" (UID: \"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8\") " pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.421911 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm"] Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.436739 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrp8x\" (UniqueName: \"kubernetes.io/projected/fd169461-4da3-47da-b2b5-d7c796f9eec9-kube-api-access-vrp8x\") pod \"nmstate-webhook-6b89b748d8-569q4\" (UID: \"fd169461-4da3-47da-b2b5-d7c796f9eec9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.450483 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqhxp\" (UniqueName: \"kubernetes.io/projected/aa0b5808-c9b9-42b0-b585-1677b72ed1f3-kube-api-access-pqhxp\") pod \"nmstate-metrics-5dcf9c57c5-4tlrm\" (UID: 
\"aa0b5808-c9b9-42b0-b585-1677b72ed1f3\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4tlrm" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.520148 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8-ovs-socket\") pod \"nmstate-handler-zgbjb\" (UID: \"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8\") " pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.520210 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/76de68fb-d44e-4e24-8843-18718d6763df-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-2mgrm\" (UID: \"76de68fb-d44e-4e24-8843-18718d6763df\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.520259 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8-nmstate-lock\") pod \"nmstate-handler-zgbjb\" (UID: \"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8\") " pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.520314 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8-dbus-socket\") pod \"nmstate-handler-zgbjb\" (UID: \"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8\") " pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.520343 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kr9k\" (UniqueName: \"kubernetes.io/projected/76de68fb-d44e-4e24-8843-18718d6763df-kube-api-access-7kr9k\") pod 
\"nmstate-console-plugin-5874bd7bc5-2mgrm\" (UID: \"76de68fb-d44e-4e24-8843-18718d6763df\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.520383 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/76de68fb-d44e-4e24-8843-18718d6763df-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-2mgrm\" (UID: \"76de68fb-d44e-4e24-8843-18718d6763df\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.520416 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22b4x\" (UniqueName: \"kubernetes.io/projected/06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8-kube-api-access-22b4x\") pod \"nmstate-handler-zgbjb\" (UID: \"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8\") " pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.520486 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8-nmstate-lock\") pod \"nmstate-handler-zgbjb\" (UID: \"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8\") " pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.520614 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8-ovs-socket\") pod \"nmstate-handler-zgbjb\" (UID: \"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8\") " pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.521116 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8-dbus-socket\") pod \"nmstate-handler-zgbjb\" 
(UID: \"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8\") " pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.555912 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22b4x\" (UniqueName: \"kubernetes.io/projected/06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8-kube-api-access-22b4x\") pod \"nmstate-handler-zgbjb\" (UID: \"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8\") " pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.575431 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4tlrm" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.595998 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-75fb55ffc9-m7pm7"] Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.596718 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.615299 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75fb55ffc9-m7pm7"] Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.615623 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.634691 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/76de68fb-d44e-4e24-8843-18718d6763df-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-2mgrm\" (UID: \"76de68fb-d44e-4e24-8843-18718d6763df\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.634896 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/76de68fb-d44e-4e24-8843-18718d6763df-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-2mgrm\" (UID: \"76de68fb-d44e-4e24-8843-18718d6763df\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.635012 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kr9k\" (UniqueName: \"kubernetes.io/projected/76de68fb-d44e-4e24-8843-18718d6763df-kube-api-access-7kr9k\") pod \"nmstate-console-plugin-5874bd7bc5-2mgrm\" (UID: \"76de68fb-d44e-4e24-8843-18718d6763df\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" Nov 24 12:10:14 crc kubenswrapper[4930]: E1124 12:10:14.635256 4930 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 24 12:10:14 crc kubenswrapper[4930]: E1124 12:10:14.635450 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76de68fb-d44e-4e24-8843-18718d6763df-plugin-serving-cert podName:76de68fb-d44e-4e24-8843-18718d6763df nodeName:}" failed. No retries permitted until 2025-11-24 12:10:15.135393034 +0000 UTC m=+661.749720984 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/76de68fb-d44e-4e24-8843-18718d6763df-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-2mgrm" (UID: "76de68fb-d44e-4e24-8843-18718d6763df") : secret "plugin-serving-cert" not found Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.636239 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/76de68fb-d44e-4e24-8843-18718d6763df-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-2mgrm\" (UID: \"76de68fb-d44e-4e24-8843-18718d6763df\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.656931 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kr9k\" (UniqueName: \"kubernetes.io/projected/76de68fb-d44e-4e24-8843-18718d6763df-kube-api-access-7kr9k\") pod \"nmstate-console-plugin-5874bd7bc5-2mgrm\" (UID: \"76de68fb-d44e-4e24-8843-18718d6763df\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.736431 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0790864-8024-4907-b809-9696c98d4171-trusted-ca-bundle\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.736913 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqz6b\" (UniqueName: \"kubernetes.io/projected/e0790864-8024-4907-b809-9696c98d4171-kube-api-access-jqz6b\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: 
I1124 12:10:14.736974 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0790864-8024-4907-b809-9696c98d4171-oauth-serving-cert\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.737007 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0790864-8024-4907-b809-9696c98d4171-console-oauth-config\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.737040 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0790864-8024-4907-b809-9696c98d4171-console-serving-cert\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.737075 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0790864-8024-4907-b809-9696c98d4171-service-ca\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.737114 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0790864-8024-4907-b809-9696c98d4171-console-config\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " 
pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.778890 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-zgbjb" event={"ID":"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8","Type":"ContainerStarted","Data":"4701408df891cb5e00d0934a9ad0d587efd6a69c67e05b9fec4d7483b311e36f"} Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.803769 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-4tlrm"] Nov 24 12:10:14 crc kubenswrapper[4930]: W1124 12:10:14.808788 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa0b5808_c9b9_42b0_b585_1677b72ed1f3.slice/crio-b61ee5eff2eb11334adefe8630eba4c77eb4a1000cfd4c988a02042ba6fbefd1 WatchSource:0}: Error finding container b61ee5eff2eb11334adefe8630eba4c77eb4a1000cfd4c988a02042ba6fbefd1: Status 404 returned error can't find the container with id b61ee5eff2eb11334adefe8630eba4c77eb4a1000cfd4c988a02042ba6fbefd1 Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.838575 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0790864-8024-4907-b809-9696c98d4171-service-ca\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.838628 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0790864-8024-4907-b809-9696c98d4171-console-config\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.838677 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0790864-8024-4907-b809-9696c98d4171-trusted-ca-bundle\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.838708 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqz6b\" (UniqueName: \"kubernetes.io/projected/e0790864-8024-4907-b809-9696c98d4171-kube-api-access-jqz6b\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.838737 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0790864-8024-4907-b809-9696c98d4171-oauth-serving-cert\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.838757 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0790864-8024-4907-b809-9696c98d4171-console-oauth-config\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.838780 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0790864-8024-4907-b809-9696c98d4171-console-serving-cert\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.840168 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/e0790864-8024-4907-b809-9696c98d4171-service-ca\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.840925 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0790864-8024-4907-b809-9696c98d4171-console-config\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.841108 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0790864-8024-4907-b809-9696c98d4171-trusted-ca-bundle\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.841172 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0790864-8024-4907-b809-9696c98d4171-oauth-serving-cert\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.844403 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0790864-8024-4907-b809-9696c98d4171-console-oauth-config\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.844973 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e0790864-8024-4907-b809-9696c98d4171-console-serving-cert\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.858981 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqz6b\" (UniqueName: \"kubernetes.io/projected/e0790864-8024-4907-b809-9696c98d4171-kube-api-access-jqz6b\") pod \"console-75fb55ffc9-m7pm7\" (UID: \"e0790864-8024-4907-b809-9696c98d4171\") " pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.940145 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fd169461-4da3-47da-b2b5-d7c796f9eec9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-569q4\" (UID: \"fd169461-4da3-47da-b2b5-d7c796f9eec9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.944454 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fd169461-4da3-47da-b2b5-d7c796f9eec9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-569q4\" (UID: \"fd169461-4da3-47da-b2b5-d7c796f9eec9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" Nov 24 12:10:14 crc kubenswrapper[4930]: I1124 12:10:14.949830 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:15 crc kubenswrapper[4930]: I1124 12:10:15.142576 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/76de68fb-d44e-4e24-8843-18718d6763df-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-2mgrm\" (UID: \"76de68fb-d44e-4e24-8843-18718d6763df\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" Nov 24 12:10:15 crc kubenswrapper[4930]: I1124 12:10:15.147176 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/76de68fb-d44e-4e24-8843-18718d6763df-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-2mgrm\" (UID: \"76de68fb-d44e-4e24-8843-18718d6763df\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" Nov 24 12:10:15 crc kubenswrapper[4930]: I1124 12:10:15.148045 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75fb55ffc9-m7pm7"] Nov 24 12:10:15 crc kubenswrapper[4930]: W1124 12:10:15.155327 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0790864_8024_4907_b809_9696c98d4171.slice/crio-db3cdfa8699c30cde4114f03a83037de44f83ac61fd863e4a2ee8e8df81ea556 WatchSource:0}: Error finding container db3cdfa8699c30cde4114f03a83037de44f83ac61fd863e4a2ee8e8df81ea556: Status 404 returned error can't find the container with id db3cdfa8699c30cde4114f03a83037de44f83ac61fd863e4a2ee8e8df81ea556 Nov 24 12:10:15 crc kubenswrapper[4930]: I1124 12:10:15.190924 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" Nov 24 12:10:15 crc kubenswrapper[4930]: I1124 12:10:15.320264 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" Nov 24 12:10:15 crc kubenswrapper[4930]: I1124 12:10:15.661577 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm"] Nov 24 12:10:15 crc kubenswrapper[4930]: I1124 12:10:15.705519 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-569q4"] Nov 24 12:10:15 crc kubenswrapper[4930]: W1124 12:10:15.712872 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd169461_4da3_47da_b2b5_d7c796f9eec9.slice/crio-4e39bf680f3d2a635ffac5f228a546c89c763d92c9f314a9b22f1ddca6b2dfa0 WatchSource:0}: Error finding container 4e39bf680f3d2a635ffac5f228a546c89c763d92c9f314a9b22f1ddca6b2dfa0: Status 404 returned error can't find the container with id 4e39bf680f3d2a635ffac5f228a546c89c763d92c9f314a9b22f1ddca6b2dfa0 Nov 24 12:10:15 crc kubenswrapper[4930]: I1124 12:10:15.787674 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" event={"ID":"76de68fb-d44e-4e24-8843-18718d6763df","Type":"ContainerStarted","Data":"e306d0ad51ea7f36865c95068bf27f09a6391c33f9b41cdfdae2f6cffe51bfad"} Nov 24 12:10:15 crc kubenswrapper[4930]: I1124 12:10:15.789380 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4tlrm" event={"ID":"aa0b5808-c9b9-42b0-b585-1677b72ed1f3","Type":"ContainerStarted","Data":"b61ee5eff2eb11334adefe8630eba4c77eb4a1000cfd4c988a02042ba6fbefd1"} Nov 24 12:10:15 crc kubenswrapper[4930]: I1124 12:10:15.791180 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75fb55ffc9-m7pm7" event={"ID":"e0790864-8024-4907-b809-9696c98d4171","Type":"ContainerStarted","Data":"477eaf6821ed29b287350ada92552654156c371aab93284600a7f1e1c1907dc3"} Nov 24 12:10:15 crc 
kubenswrapper[4930]: I1124 12:10:15.791218 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75fb55ffc9-m7pm7" event={"ID":"e0790864-8024-4907-b809-9696c98d4171","Type":"ContainerStarted","Data":"db3cdfa8699c30cde4114f03a83037de44f83ac61fd863e4a2ee8e8df81ea556"} Nov 24 12:10:15 crc kubenswrapper[4930]: I1124 12:10:15.793531 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" event={"ID":"fd169461-4da3-47da-b2b5-d7c796f9eec9","Type":"ContainerStarted","Data":"4e39bf680f3d2a635ffac5f228a546c89c763d92c9f314a9b22f1ddca6b2dfa0"} Nov 24 12:10:15 crc kubenswrapper[4930]: I1124 12:10:15.814452 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-75fb55ffc9-m7pm7" podStartSLOduration=1.814424722 podStartE2EDuration="1.814424722s" podCreationTimestamp="2025-11-24 12:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:10:15.810342403 +0000 UTC m=+662.424670353" watchObservedRunningTime="2025-11-24 12:10:15.814424722 +0000 UTC m=+662.428752672" Nov 24 12:10:17 crc kubenswrapper[4930]: I1124 12:10:17.807450 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" event={"ID":"fd169461-4da3-47da-b2b5-d7c796f9eec9","Type":"ContainerStarted","Data":"fed4f2c7f96d1d9262706cd54b3d0472446f4bfe3962cee7e99225918c488878"} Nov 24 12:10:17 crc kubenswrapper[4930]: I1124 12:10:17.807927 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" Nov 24 12:10:17 crc kubenswrapper[4930]: I1124 12:10:17.808600 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-zgbjb" 
event={"ID":"06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8","Type":"ContainerStarted","Data":"4f4b6f2aeac0ed5306542f476697fe5c51d45c4106c80096974a2a08de9bcde9"} Nov 24 12:10:17 crc kubenswrapper[4930]: I1124 12:10:17.808705 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:17 crc kubenswrapper[4930]: I1124 12:10:17.810517 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4tlrm" event={"ID":"aa0b5808-c9b9-42b0-b585-1677b72ed1f3","Type":"ContainerStarted","Data":"95c030372c8e2c4a5f5dc5b1f7ed067ee3902c531e6a6701fd5555cc36239c95"} Nov 24 12:10:17 crc kubenswrapper[4930]: I1124 12:10:17.831220 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" podStartSLOduration=2.439455962 podStartE2EDuration="3.831193106s" podCreationTimestamp="2025-11-24 12:10:14 +0000 UTC" firstStartedPulling="2025-11-24 12:10:15.715907921 +0000 UTC m=+662.330235871" lastFinishedPulling="2025-11-24 12:10:17.107645065 +0000 UTC m=+663.721973015" observedRunningTime="2025-11-24 12:10:17.82444179 +0000 UTC m=+664.438769740" watchObservedRunningTime="2025-11-24 12:10:17.831193106 +0000 UTC m=+664.445521076" Nov 24 12:10:17 crc kubenswrapper[4930]: I1124 12:10:17.842900 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-zgbjb" podStartSLOduration=1.387157135 podStartE2EDuration="3.842880426s" podCreationTimestamp="2025-11-24 12:10:14 +0000 UTC" firstStartedPulling="2025-11-24 12:10:14.655299952 +0000 UTC m=+661.269627892" lastFinishedPulling="2025-11-24 12:10:17.111023233 +0000 UTC m=+663.725351183" observedRunningTime="2025-11-24 12:10:17.837033726 +0000 UTC m=+664.451361716" watchObservedRunningTime="2025-11-24 12:10:17.842880426 +0000 UTC m=+664.457208376" Nov 24 12:10:18 crc kubenswrapper[4930]: I1124 12:10:18.819671 4930 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" event={"ID":"76de68fb-d44e-4e24-8843-18718d6763df","Type":"ContainerStarted","Data":"1c083c4152de5e4b3399c8cc8be9585a147cb8be98d6b42c170fd9f0a8902150"} Nov 24 12:10:18 crc kubenswrapper[4930]: I1124 12:10:18.840195 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-2mgrm" podStartSLOduration=2.421922523 podStartE2EDuration="4.840169856s" podCreationTimestamp="2025-11-24 12:10:14 +0000 UTC" firstStartedPulling="2025-11-24 12:10:15.671749319 +0000 UTC m=+662.286077279" lastFinishedPulling="2025-11-24 12:10:18.089996652 +0000 UTC m=+664.704324612" observedRunningTime="2025-11-24 12:10:18.834751289 +0000 UTC m=+665.449079239" watchObservedRunningTime="2025-11-24 12:10:18.840169856 +0000 UTC m=+665.454497826" Nov 24 12:10:19 crc kubenswrapper[4930]: I1124 12:10:19.828638 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4tlrm" event={"ID":"aa0b5808-c9b9-42b0-b585-1677b72ed1f3","Type":"ContainerStarted","Data":"16eb087ea1dc325cf64385751b6f96362ecc264b23b8e5df0e4238da68f92386"} Nov 24 12:10:19 crc kubenswrapper[4930]: I1124 12:10:19.846570 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4tlrm" podStartSLOduration=1.3693166159999999 podStartE2EDuration="5.84652204s" podCreationTimestamp="2025-11-24 12:10:14 +0000 UTC" firstStartedPulling="2025-11-24 12:10:14.811553699 +0000 UTC m=+661.425881649" lastFinishedPulling="2025-11-24 12:10:19.288759123 +0000 UTC m=+665.903087073" observedRunningTime="2025-11-24 12:10:19.844160481 +0000 UTC m=+666.458488451" watchObservedRunningTime="2025-11-24 12:10:19.84652204 +0000 UTC m=+666.460849990" Nov 24 12:10:24 crc kubenswrapper[4930]: I1124 12:10:24.646276 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-nmstate/nmstate-handler-zgbjb" Nov 24 12:10:24 crc kubenswrapper[4930]: I1124 12:10:24.950919 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:24 crc kubenswrapper[4930]: I1124 12:10:24.951016 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:24 crc kubenswrapper[4930]: I1124 12:10:24.958993 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:25 crc kubenswrapper[4930]: I1124 12:10:25.867463 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-75fb55ffc9-m7pm7" Nov 24 12:10:25 crc kubenswrapper[4930]: I1124 12:10:25.918951 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qfqmk"] Nov 24 12:10:35 crc kubenswrapper[4930]: I1124 12:10:35.197727 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-569q4" Nov 24 12:10:47 crc kubenswrapper[4930]: I1124 12:10:47.819164 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q"] Nov 24 12:10:47 crc kubenswrapper[4930]: I1124 12:10:47.820976 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" Nov 24 12:10:47 crc kubenswrapper[4930]: I1124 12:10:47.822354 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q"] Nov 24 12:10:47 crc kubenswrapper[4930]: I1124 12:10:47.828664 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 12:10:47 crc kubenswrapper[4930]: I1124 12:10:47.920443 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmqcr\" (UniqueName: \"kubernetes.io/projected/30194744-e459-4f4e-8f0c-5205d76aa5e0-kube-api-access-kmqcr\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q\" (UID: \"30194744-e459-4f4e-8f0c-5205d76aa5e0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" Nov 24 12:10:47 crc kubenswrapper[4930]: I1124 12:10:47.920509 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30194744-e459-4f4e-8f0c-5205d76aa5e0-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q\" (UID: \"30194744-e459-4f4e-8f0c-5205d76aa5e0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" Nov 24 12:10:47 crc kubenswrapper[4930]: I1124 12:10:47.920716 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30194744-e459-4f4e-8f0c-5205d76aa5e0-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q\" (UID: \"30194744-e459-4f4e-8f0c-5205d76aa5e0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" Nov 24 12:10:48 crc kubenswrapper[4930]: 
I1124 12:10:48.021796 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30194744-e459-4f4e-8f0c-5205d76aa5e0-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q\" (UID: \"30194744-e459-4f4e-8f0c-5205d76aa5e0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" Nov 24 12:10:48 crc kubenswrapper[4930]: I1124 12:10:48.021873 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30194744-e459-4f4e-8f0c-5205d76aa5e0-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q\" (UID: \"30194744-e459-4f4e-8f0c-5205d76aa5e0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" Nov 24 12:10:48 crc kubenswrapper[4930]: I1124 12:10:48.021918 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmqcr\" (UniqueName: \"kubernetes.io/projected/30194744-e459-4f4e-8f0c-5205d76aa5e0-kube-api-access-kmqcr\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q\" (UID: \"30194744-e459-4f4e-8f0c-5205d76aa5e0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" Nov 24 12:10:48 crc kubenswrapper[4930]: I1124 12:10:48.022702 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30194744-e459-4f4e-8f0c-5205d76aa5e0-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q\" (UID: \"30194744-e459-4f4e-8f0c-5205d76aa5e0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" Nov 24 12:10:48 crc kubenswrapper[4930]: I1124 12:10:48.022702 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/30194744-e459-4f4e-8f0c-5205d76aa5e0-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q\" (UID: \"30194744-e459-4f4e-8f0c-5205d76aa5e0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" Nov 24 12:10:48 crc kubenswrapper[4930]: I1124 12:10:48.054160 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmqcr\" (UniqueName: \"kubernetes.io/projected/30194744-e459-4f4e-8f0c-5205d76aa5e0-kube-api-access-kmqcr\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q\" (UID: \"30194744-e459-4f4e-8f0c-5205d76aa5e0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" Nov 24 12:10:48 crc kubenswrapper[4930]: I1124 12:10:48.151049 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" Nov 24 12:10:48 crc kubenswrapper[4930]: I1124 12:10:48.623607 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q"] Nov 24 12:10:48 crc kubenswrapper[4930]: I1124 12:10:48.994986 4930 generic.go:334] "Generic (PLEG): container finished" podID="30194744-e459-4f4e-8f0c-5205d76aa5e0" containerID="71c3eeed337a4d95d0fa191542cd0dcc11fcd92ab860e35edc8dd35d1fc08c94" exitCode=0 Nov 24 12:10:48 crc kubenswrapper[4930]: I1124 12:10:48.995036 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" event={"ID":"30194744-e459-4f4e-8f0c-5205d76aa5e0","Type":"ContainerDied","Data":"71c3eeed337a4d95d0fa191542cd0dcc11fcd92ab860e35edc8dd35d1fc08c94"} Nov 24 12:10:48 crc kubenswrapper[4930]: I1124 12:10:48.995233 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" event={"ID":"30194744-e459-4f4e-8f0c-5205d76aa5e0","Type":"ContainerStarted","Data":"042d6362f535fb3a3f8abdb0ef7210bedba1c6984c4c4d66efcc96bc24eb9add"} Nov 24 12:10:50 crc kubenswrapper[4930]: I1124 12:10:50.967957 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-qfqmk" podUID="507084c7-1280-4943-bff6-497f1dc21c0a" containerName="console" containerID="cri-o://b78814f87cbb6135ad47e81884f102bd5fceaaf5e198f10bd296d6ac326de644" gracePeriod=15 Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.006284 4930 generic.go:334] "Generic (PLEG): container finished" podID="30194744-e459-4f4e-8f0c-5205d76aa5e0" containerID="af797e668b0c5f472b01be704890bfe36030de6cdd3e4797205a5dfef4472d29" exitCode=0 Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.006335 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" event={"ID":"30194744-e459-4f4e-8f0c-5205d76aa5e0","Type":"ContainerDied","Data":"af797e668b0c5f472b01be704890bfe36030de6cdd3e4797205a5dfef4472d29"} Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.315661 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qfqmk_507084c7-1280-4943-bff6-497f1dc21c0a/console/0.log" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.315940 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.467751 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-service-ca\") pod \"507084c7-1280-4943-bff6-497f1dc21c0a\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.467828 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/507084c7-1280-4943-bff6-497f1dc21c0a-console-serving-cert\") pod \"507084c7-1280-4943-bff6-497f1dc21c0a\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.467859 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-console-config\") pod \"507084c7-1280-4943-bff6-497f1dc21c0a\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.467884 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/507084c7-1280-4943-bff6-497f1dc21c0a-console-oauth-config\") pod \"507084c7-1280-4943-bff6-497f1dc21c0a\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.467957 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4rvd\" (UniqueName: \"kubernetes.io/projected/507084c7-1280-4943-bff6-497f1dc21c0a-kube-api-access-p4rvd\") pod \"507084c7-1280-4943-bff6-497f1dc21c0a\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.467972 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-oauth-serving-cert\") pod \"507084c7-1280-4943-bff6-497f1dc21c0a\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.467997 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-trusted-ca-bundle\") pod \"507084c7-1280-4943-bff6-497f1dc21c0a\" (UID: \"507084c7-1280-4943-bff6-497f1dc21c0a\") " Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.468840 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-service-ca" (OuterVolumeSpecName: "service-ca") pod "507084c7-1280-4943-bff6-497f1dc21c0a" (UID: "507084c7-1280-4943-bff6-497f1dc21c0a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.468861 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "507084c7-1280-4943-bff6-497f1dc21c0a" (UID: "507084c7-1280-4943-bff6-497f1dc21c0a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.469261 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-console-config" (OuterVolumeSpecName: "console-config") pod "507084c7-1280-4943-bff6-497f1dc21c0a" (UID: "507084c7-1280-4943-bff6-497f1dc21c0a"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.469332 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "507084c7-1280-4943-bff6-497f1dc21c0a" (UID: "507084c7-1280-4943-bff6-497f1dc21c0a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.473771 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507084c7-1280-4943-bff6-497f1dc21c0a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "507084c7-1280-4943-bff6-497f1dc21c0a" (UID: "507084c7-1280-4943-bff6-497f1dc21c0a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.474199 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507084c7-1280-4943-bff6-497f1dc21c0a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "507084c7-1280-4943-bff6-497f1dc21c0a" (UID: "507084c7-1280-4943-bff6-497f1dc21c0a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.474256 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507084c7-1280-4943-bff6-497f1dc21c0a-kube-api-access-p4rvd" (OuterVolumeSpecName: "kube-api-access-p4rvd") pod "507084c7-1280-4943-bff6-497f1dc21c0a" (UID: "507084c7-1280-4943-bff6-497f1dc21c0a"). InnerVolumeSpecName "kube-api-access-p4rvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.568886 4930 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/507084c7-1280-4943-bff6-497f1dc21c0a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.568915 4930 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.568924 4930 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/507084c7-1280-4943-bff6-497f1dc21c0a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.568932 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4rvd\" (UniqueName: \"kubernetes.io/projected/507084c7-1280-4943-bff6-497f1dc21c0a-kube-api-access-p4rvd\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.568941 4930 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.568948 4930 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:51 crc kubenswrapper[4930]: I1124 12:10:51.568956 4930 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/507084c7-1280-4943-bff6-497f1dc21c0a-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:52 crc 
kubenswrapper[4930]: I1124 12:10:52.014739 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qfqmk_507084c7-1280-4943-bff6-497f1dc21c0a/console/0.log" Nov 24 12:10:52 crc kubenswrapper[4930]: I1124 12:10:52.015088 4930 generic.go:334] "Generic (PLEG): container finished" podID="507084c7-1280-4943-bff6-497f1dc21c0a" containerID="b78814f87cbb6135ad47e81884f102bd5fceaaf5e198f10bd296d6ac326de644" exitCode=2 Nov 24 12:10:52 crc kubenswrapper[4930]: I1124 12:10:52.015118 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qfqmk" event={"ID":"507084c7-1280-4943-bff6-497f1dc21c0a","Type":"ContainerDied","Data":"b78814f87cbb6135ad47e81884f102bd5fceaaf5e198f10bd296d6ac326de644"} Nov 24 12:10:52 crc kubenswrapper[4930]: I1124 12:10:52.015144 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qfqmk" event={"ID":"507084c7-1280-4943-bff6-497f1dc21c0a","Type":"ContainerDied","Data":"e55dba9c5bcc5f7cc9f36396f056ff999bf57c283ab1d260840fdefc0a610c4e"} Nov 24 12:10:52 crc kubenswrapper[4930]: I1124 12:10:52.015161 4930 scope.go:117] "RemoveContainer" containerID="b78814f87cbb6135ad47e81884f102bd5fceaaf5e198f10bd296d6ac326de644" Nov 24 12:10:52 crc kubenswrapper[4930]: I1124 12:10:52.015217 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qfqmk" Nov 24 12:10:52 crc kubenswrapper[4930]: I1124 12:10:52.030687 4930 scope.go:117] "RemoveContainer" containerID="b78814f87cbb6135ad47e81884f102bd5fceaaf5e198f10bd296d6ac326de644" Nov 24 12:10:52 crc kubenswrapper[4930]: E1124 12:10:52.031221 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b78814f87cbb6135ad47e81884f102bd5fceaaf5e198f10bd296d6ac326de644\": container with ID starting with b78814f87cbb6135ad47e81884f102bd5fceaaf5e198f10bd296d6ac326de644 not found: ID does not exist" containerID="b78814f87cbb6135ad47e81884f102bd5fceaaf5e198f10bd296d6ac326de644" Nov 24 12:10:52 crc kubenswrapper[4930]: I1124 12:10:52.031274 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b78814f87cbb6135ad47e81884f102bd5fceaaf5e198f10bd296d6ac326de644"} err="failed to get container status \"b78814f87cbb6135ad47e81884f102bd5fceaaf5e198f10bd296d6ac326de644\": rpc error: code = NotFound desc = could not find container \"b78814f87cbb6135ad47e81884f102bd5fceaaf5e198f10bd296d6ac326de644\": container with ID starting with b78814f87cbb6135ad47e81884f102bd5fceaaf5e198f10bd296d6ac326de644 not found: ID does not exist" Nov 24 12:10:52 crc kubenswrapper[4930]: I1124 12:10:52.045662 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qfqmk"] Nov 24 12:10:52 crc kubenswrapper[4930]: I1124 12:10:52.051628 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-qfqmk"] Nov 24 12:10:52 crc kubenswrapper[4930]: I1124 12:10:52.093097 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507084c7-1280-4943-bff6-497f1dc21c0a" path="/var/lib/kubelet/pods/507084c7-1280-4943-bff6-497f1dc21c0a/volumes" Nov 24 12:10:52 crc kubenswrapper[4930]: E1124 12:10:52.220067 4930 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30194744_e459_4f4e_8f0c_5205d76aa5e0.slice/crio-c60ca4271afcf250e635c4c037d5f1c34fa06c40a093e31c427e00d25f2dab89.scope\": RecentStats: unable to find data in memory cache]" Nov 24 12:10:53 crc kubenswrapper[4930]: I1124 12:10:53.025912 4930 generic.go:334] "Generic (PLEG): container finished" podID="30194744-e459-4f4e-8f0c-5205d76aa5e0" containerID="c60ca4271afcf250e635c4c037d5f1c34fa06c40a093e31c427e00d25f2dab89" exitCode=0 Nov 24 12:10:53 crc kubenswrapper[4930]: I1124 12:10:53.025956 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" event={"ID":"30194744-e459-4f4e-8f0c-5205d76aa5e0","Type":"ContainerDied","Data":"c60ca4271afcf250e635c4c037d5f1c34fa06c40a093e31c427e00d25f2dab89"} Nov 24 12:10:54 crc kubenswrapper[4930]: I1124 12:10:54.241929 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" Nov 24 12:10:54 crc kubenswrapper[4930]: I1124 12:10:54.405998 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmqcr\" (UniqueName: \"kubernetes.io/projected/30194744-e459-4f4e-8f0c-5205d76aa5e0-kube-api-access-kmqcr\") pod \"30194744-e459-4f4e-8f0c-5205d76aa5e0\" (UID: \"30194744-e459-4f4e-8f0c-5205d76aa5e0\") " Nov 24 12:10:54 crc kubenswrapper[4930]: I1124 12:10:54.406131 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30194744-e459-4f4e-8f0c-5205d76aa5e0-util\") pod \"30194744-e459-4f4e-8f0c-5205d76aa5e0\" (UID: \"30194744-e459-4f4e-8f0c-5205d76aa5e0\") " Nov 24 12:10:54 crc kubenswrapper[4930]: I1124 12:10:54.406156 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30194744-e459-4f4e-8f0c-5205d76aa5e0-bundle\") pod \"30194744-e459-4f4e-8f0c-5205d76aa5e0\" (UID: \"30194744-e459-4f4e-8f0c-5205d76aa5e0\") " Nov 24 12:10:54 crc kubenswrapper[4930]: I1124 12:10:54.407167 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30194744-e459-4f4e-8f0c-5205d76aa5e0-bundle" (OuterVolumeSpecName: "bundle") pod "30194744-e459-4f4e-8f0c-5205d76aa5e0" (UID: "30194744-e459-4f4e-8f0c-5205d76aa5e0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:10:54 crc kubenswrapper[4930]: I1124 12:10:54.414859 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30194744-e459-4f4e-8f0c-5205d76aa5e0-kube-api-access-kmqcr" (OuterVolumeSpecName: "kube-api-access-kmqcr") pod "30194744-e459-4f4e-8f0c-5205d76aa5e0" (UID: "30194744-e459-4f4e-8f0c-5205d76aa5e0"). InnerVolumeSpecName "kube-api-access-kmqcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:10:54 crc kubenswrapper[4930]: I1124 12:10:54.508063 4930 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30194744-e459-4f4e-8f0c-5205d76aa5e0-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:54 crc kubenswrapper[4930]: I1124 12:10:54.508098 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmqcr\" (UniqueName: \"kubernetes.io/projected/30194744-e459-4f4e-8f0c-5205d76aa5e0-kube-api-access-kmqcr\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:54 crc kubenswrapper[4930]: I1124 12:10:54.573117 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30194744-e459-4f4e-8f0c-5205d76aa5e0-util" (OuterVolumeSpecName: "util") pod "30194744-e459-4f4e-8f0c-5205d76aa5e0" (UID: "30194744-e459-4f4e-8f0c-5205d76aa5e0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:10:54 crc kubenswrapper[4930]: I1124 12:10:54.609009 4930 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30194744-e459-4f4e-8f0c-5205d76aa5e0-util\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:55 crc kubenswrapper[4930]: I1124 12:10:55.038995 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" Nov 24 12:10:55 crc kubenswrapper[4930]: I1124 12:10:55.038993 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q" event={"ID":"30194744-e459-4f4e-8f0c-5205d76aa5e0","Type":"ContainerDied","Data":"042d6362f535fb3a3f8abdb0ef7210bedba1c6984c4c4d66efcc96bc24eb9add"} Nov 24 12:10:55 crc kubenswrapper[4930]: I1124 12:10:55.039122 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="042d6362f535fb3a3f8abdb0ef7210bedba1c6984c4c4d66efcc96bc24eb9add" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.835863 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4"] Nov 24 12:11:10 crc kubenswrapper[4930]: E1124 12:11:10.836672 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30194744-e459-4f4e-8f0c-5205d76aa5e0" containerName="util" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.836687 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="30194744-e459-4f4e-8f0c-5205d76aa5e0" containerName="util" Nov 24 12:11:10 crc kubenswrapper[4930]: E1124 12:11:10.836703 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30194744-e459-4f4e-8f0c-5205d76aa5e0" containerName="pull" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.836709 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="30194744-e459-4f4e-8f0c-5205d76aa5e0" containerName="pull" Nov 24 12:11:10 crc kubenswrapper[4930]: E1124 12:11:10.836720 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30194744-e459-4f4e-8f0c-5205d76aa5e0" containerName="extract" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.836727 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="30194744-e459-4f4e-8f0c-5205d76aa5e0" 
containerName="extract" Nov 24 12:11:10 crc kubenswrapper[4930]: E1124 12:11:10.836746 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507084c7-1280-4943-bff6-497f1dc21c0a" containerName="console" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.836753 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="507084c7-1280-4943-bff6-497f1dc21c0a" containerName="console" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.836864 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="507084c7-1280-4943-bff6-497f1dc21c0a" containerName="console" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.836875 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="30194744-e459-4f4e-8f0c-5205d76aa5e0" containerName="extract" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.837337 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.841329 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.842194 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.842298 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.842651 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.842706 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-2qc9h" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.851864 4930 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4"] Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.918370 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lknt9\" (UniqueName: \"kubernetes.io/projected/37f079f2-d796-4fce-8fdb-030a0a663e1b-kube-api-access-lknt9\") pod \"metallb-operator-controller-manager-6d8988b99d-fjfg4\" (UID: \"37f079f2-d796-4fce-8fdb-030a0a663e1b\") " pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.918446 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/37f079f2-d796-4fce-8fdb-030a0a663e1b-webhook-cert\") pod \"metallb-operator-controller-manager-6d8988b99d-fjfg4\" (UID: \"37f079f2-d796-4fce-8fdb-030a0a663e1b\") " pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" Nov 24 12:11:10 crc kubenswrapper[4930]: I1124 12:11:10.918560 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/37f079f2-d796-4fce-8fdb-030a0a663e1b-apiservice-cert\") pod \"metallb-operator-controller-manager-6d8988b99d-fjfg4\" (UID: \"37f079f2-d796-4fce-8fdb-030a0a663e1b\") " pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.020435 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/37f079f2-d796-4fce-8fdb-030a0a663e1b-webhook-cert\") pod \"metallb-operator-controller-manager-6d8988b99d-fjfg4\" (UID: \"37f079f2-d796-4fce-8fdb-030a0a663e1b\") " pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 
12:11:11.020809 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/37f079f2-d796-4fce-8fdb-030a0a663e1b-apiservice-cert\") pod \"metallb-operator-controller-manager-6d8988b99d-fjfg4\" (UID: \"37f079f2-d796-4fce-8fdb-030a0a663e1b\") " pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.020985 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lknt9\" (UniqueName: \"kubernetes.io/projected/37f079f2-d796-4fce-8fdb-030a0a663e1b-kube-api-access-lknt9\") pod \"metallb-operator-controller-manager-6d8988b99d-fjfg4\" (UID: \"37f079f2-d796-4fce-8fdb-030a0a663e1b\") " pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.027655 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/37f079f2-d796-4fce-8fdb-030a0a663e1b-webhook-cert\") pod \"metallb-operator-controller-manager-6d8988b99d-fjfg4\" (UID: \"37f079f2-d796-4fce-8fdb-030a0a663e1b\") " pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.027753 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/37f079f2-d796-4fce-8fdb-030a0a663e1b-apiservice-cert\") pod \"metallb-operator-controller-manager-6d8988b99d-fjfg4\" (UID: \"37f079f2-d796-4fce-8fdb-030a0a663e1b\") " pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.042395 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lknt9\" (UniqueName: \"kubernetes.io/projected/37f079f2-d796-4fce-8fdb-030a0a663e1b-kube-api-access-lknt9\") pod 
\"metallb-operator-controller-manager-6d8988b99d-fjfg4\" (UID: \"37f079f2-d796-4fce-8fdb-030a0a663e1b\") " pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.108343 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-677786b954-pxf8r"] Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.109310 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.111717 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.112184 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.112781 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-pz75t" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.129556 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-677786b954-pxf8r"] Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.153462 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.223150 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5339f9f0-99ee-4ff8-90cc-8ab86611abc6-apiservice-cert\") pod \"metallb-operator-webhook-server-677786b954-pxf8r\" (UID: \"5339f9f0-99ee-4ff8-90cc-8ab86611abc6\") " pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.223494 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpmh5\" (UniqueName: \"kubernetes.io/projected/5339f9f0-99ee-4ff8-90cc-8ab86611abc6-kube-api-access-jpmh5\") pod \"metallb-operator-webhook-server-677786b954-pxf8r\" (UID: \"5339f9f0-99ee-4ff8-90cc-8ab86611abc6\") " pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.223598 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5339f9f0-99ee-4ff8-90cc-8ab86611abc6-webhook-cert\") pod \"metallb-operator-webhook-server-677786b954-pxf8r\" (UID: \"5339f9f0-99ee-4ff8-90cc-8ab86611abc6\") " pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.325453 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpmh5\" (UniqueName: \"kubernetes.io/projected/5339f9f0-99ee-4ff8-90cc-8ab86611abc6-kube-api-access-jpmh5\") pod \"metallb-operator-webhook-server-677786b954-pxf8r\" (UID: \"5339f9f0-99ee-4ff8-90cc-8ab86611abc6\") " pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.325802 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5339f9f0-99ee-4ff8-90cc-8ab86611abc6-webhook-cert\") pod \"metallb-operator-webhook-server-677786b954-pxf8r\" (UID: \"5339f9f0-99ee-4ff8-90cc-8ab86611abc6\") " pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.325837 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5339f9f0-99ee-4ff8-90cc-8ab86611abc6-apiservice-cert\") pod \"metallb-operator-webhook-server-677786b954-pxf8r\" (UID: \"5339f9f0-99ee-4ff8-90cc-8ab86611abc6\") " pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.352133 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5339f9f0-99ee-4ff8-90cc-8ab86611abc6-webhook-cert\") pod \"metallb-operator-webhook-server-677786b954-pxf8r\" (UID: \"5339f9f0-99ee-4ff8-90cc-8ab86611abc6\") " pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.355736 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpmh5\" (UniqueName: \"kubernetes.io/projected/5339f9f0-99ee-4ff8-90cc-8ab86611abc6-kube-api-access-jpmh5\") pod \"metallb-operator-webhook-server-677786b954-pxf8r\" (UID: \"5339f9f0-99ee-4ff8-90cc-8ab86611abc6\") " pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.372578 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5339f9f0-99ee-4ff8-90cc-8ab86611abc6-apiservice-cert\") pod \"metallb-operator-webhook-server-677786b954-pxf8r\" (UID: 
\"5339f9f0-99ee-4ff8-90cc-8ab86611abc6\") " pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.426739 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.618679 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4"] Nov 24 12:11:11 crc kubenswrapper[4930]: I1124 12:11:11.855125 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-677786b954-pxf8r"] Nov 24 12:11:11 crc kubenswrapper[4930]: W1124 12:11:11.862557 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5339f9f0_99ee_4ff8_90cc_8ab86611abc6.slice/crio-231561db8ba3078121c7e2bf97b14143f051a38e66f0fd642384b04740c8a360 WatchSource:0}: Error finding container 231561db8ba3078121c7e2bf97b14143f051a38e66f0fd642384b04740c8a360: Status 404 returned error can't find the container with id 231561db8ba3078121c7e2bf97b14143f051a38e66f0fd642384b04740c8a360 Nov 24 12:11:12 crc kubenswrapper[4930]: I1124 12:11:12.123834 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" event={"ID":"5339f9f0-99ee-4ff8-90cc-8ab86611abc6","Type":"ContainerStarted","Data":"231561db8ba3078121c7e2bf97b14143f051a38e66f0fd642384b04740c8a360"} Nov 24 12:11:12 crc kubenswrapper[4930]: I1124 12:11:12.125802 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" event={"ID":"37f079f2-d796-4fce-8fdb-030a0a663e1b","Type":"ContainerStarted","Data":"9e06297efc76138340e207f627868564cff9fba4030546038d89d514edf23e43"} Nov 24 12:11:15 crc kubenswrapper[4930]: I1124 12:11:15.148233 4930 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" event={"ID":"37f079f2-d796-4fce-8fdb-030a0a663e1b","Type":"ContainerStarted","Data":"8682281b4c5317999d578cc9ec4cf954ca0c33943e270db9b81cbb8951a54124"} Nov 24 12:11:15 crc kubenswrapper[4930]: I1124 12:11:15.148740 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" Nov 24 12:11:15 crc kubenswrapper[4930]: I1124 12:11:15.176743 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" podStartSLOduration=2.34082541 podStartE2EDuration="5.176723281s" podCreationTimestamp="2025-11-24 12:11:10 +0000 UTC" firstStartedPulling="2025-11-24 12:11:11.626001962 +0000 UTC m=+718.240329912" lastFinishedPulling="2025-11-24 12:11:14.461899833 +0000 UTC m=+721.076227783" observedRunningTime="2025-11-24 12:11:15.168025229 +0000 UTC m=+721.782353189" watchObservedRunningTime="2025-11-24 12:11:15.176723281 +0000 UTC m=+721.791051231" Nov 24 12:11:18 crc kubenswrapper[4930]: I1124 12:11:18.170652 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" event={"ID":"5339f9f0-99ee-4ff8-90cc-8ab86611abc6","Type":"ContainerStarted","Data":"4145c634c1c160ee215a3c3cca9f90449bdf3d895c8434ef623a61e15581abe8"} Nov 24 12:11:18 crc kubenswrapper[4930]: I1124 12:11:18.171358 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" Nov 24 12:11:18 crc kubenswrapper[4930]: I1124 12:11:18.190171 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" podStartSLOduration=1.816726327 podStartE2EDuration="7.190147986s" podCreationTimestamp="2025-11-24 12:11:11 +0000 UTC" 
firstStartedPulling="2025-11-24 12:11:11.865076044 +0000 UTC m=+718.479403994" lastFinishedPulling="2025-11-24 12:11:17.238497703 +0000 UTC m=+723.852825653" observedRunningTime="2025-11-24 12:11:18.187816279 +0000 UTC m=+724.802144229" watchObservedRunningTime="2025-11-24 12:11:18.190147986 +0000 UTC m=+724.804475936" Nov 24 12:11:31 crc kubenswrapper[4930]: I1124 12:11:31.435760 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-677786b954-pxf8r" Nov 24 12:11:36 crc kubenswrapper[4930]: I1124 12:11:36.742390 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dkr44"] Nov 24 12:11:36 crc kubenswrapper[4930]: I1124 12:11:36.743158 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" podUID="8eada43a-ea1e-4565-a042-716f030ba99d" containerName="controller-manager" containerID="cri-o://db946928ef655d99a6877240511608667d38504f1ce1c5a34bcb8c8c56fd5935" gracePeriod=30 Nov 24 12:11:36 crc kubenswrapper[4930]: I1124 12:11:36.842320 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz"] Nov 24 12:11:36 crc kubenswrapper[4930]: I1124 12:11:36.842595 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" podUID="fed4ab08-54d0-4526-bd9a-3d1e660fc31a" containerName="route-controller-manager" containerID="cri-o://ab317a8bed770d0e507ec5d677d443b7eb12f2552b9583dd0d8a8a9b21ccd8f0" gracePeriod=30 Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.147986 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.210662 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.277449 4930 generic.go:334] "Generic (PLEG): container finished" podID="fed4ab08-54d0-4526-bd9a-3d1e660fc31a" containerID="ab317a8bed770d0e507ec5d677d443b7eb12f2552b9583dd0d8a8a9b21ccd8f0" exitCode=0 Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.277553 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.278076 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" event={"ID":"fed4ab08-54d0-4526-bd9a-3d1e660fc31a","Type":"ContainerDied","Data":"ab317a8bed770d0e507ec5d677d443b7eb12f2552b9583dd0d8a8a9b21ccd8f0"} Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.278125 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz" event={"ID":"fed4ab08-54d0-4526-bd9a-3d1e660fc31a","Type":"ContainerDied","Data":"7a05b8d720027524934d482317b70875988c970ba90db0724f824f127903fc0d"} Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.278149 4930 scope.go:117] "RemoveContainer" containerID="ab317a8bed770d0e507ec5d677d443b7eb12f2552b9583dd0d8a8a9b21ccd8f0" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.281003 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-client-ca\") pod \"8eada43a-ea1e-4565-a042-716f030ba99d\" (UID: 
\"8eada43a-ea1e-4565-a042-716f030ba99d\") " Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.281056 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl655\" (UniqueName: \"kubernetes.io/projected/8eada43a-ea1e-4565-a042-716f030ba99d-kube-api-access-bl655\") pod \"8eada43a-ea1e-4565-a042-716f030ba99d\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.281090 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-client-ca\") pod \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.281113 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-serving-cert\") pod \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.281138 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59xgd\" (UniqueName: \"kubernetes.io/projected/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-kube-api-access-59xgd\") pod \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.281184 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-config\") pod \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\" (UID: \"fed4ab08-54d0-4526-bd9a-3d1e660fc31a\") " Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.281218 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-config\") pod \"8eada43a-ea1e-4565-a042-716f030ba99d\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.281262 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eada43a-ea1e-4565-a042-716f030ba99d-serving-cert\") pod \"8eada43a-ea1e-4565-a042-716f030ba99d\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.281303 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-proxy-ca-bundles\") pod \"8eada43a-ea1e-4565-a042-716f030ba99d\" (UID: \"8eada43a-ea1e-4565-a042-716f030ba99d\") " Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.281785 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-client-ca" (OuterVolumeSpecName: "client-ca") pod "8eada43a-ea1e-4565-a042-716f030ba99d" (UID: "8eada43a-ea1e-4565-a042-716f030ba99d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.282071 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8eada43a-ea1e-4565-a042-716f030ba99d" (UID: "8eada43a-ea1e-4565-a042-716f030ba99d"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.282148 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-client-ca" (OuterVolumeSpecName: "client-ca") pod "fed4ab08-54d0-4526-bd9a-3d1e660fc31a" (UID: "fed4ab08-54d0-4526-bd9a-3d1e660fc31a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.282499 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-config" (OuterVolumeSpecName: "config") pod "fed4ab08-54d0-4526-bd9a-3d1e660fc31a" (UID: "fed4ab08-54d0-4526-bd9a-3d1e660fc31a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.282588 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-config" (OuterVolumeSpecName: "config") pod "8eada43a-ea1e-4565-a042-716f030ba99d" (UID: "8eada43a-ea1e-4565-a042-716f030ba99d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.285125 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.285202 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" event={"ID":"8eada43a-ea1e-4565-a042-716f030ba99d","Type":"ContainerDied","Data":"db946928ef655d99a6877240511608667d38504f1ce1c5a34bcb8c8c56fd5935"} Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.287144 4930 generic.go:334] "Generic (PLEG): container finished" podID="8eada43a-ea1e-4565-a042-716f030ba99d" containerID="db946928ef655d99a6877240511608667d38504f1ce1c5a34bcb8c8c56fd5935" exitCode=0 Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.287206 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dkr44" event={"ID":"8eada43a-ea1e-4565-a042-716f030ba99d","Type":"ContainerDied","Data":"2738cce9c251a8602b9d22c33a0d9707b77747b851118f319e5c9f008e0dc0a0"} Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.287781 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fed4ab08-54d0-4526-bd9a-3d1e660fc31a" (UID: "fed4ab08-54d0-4526-bd9a-3d1e660fc31a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.289054 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eada43a-ea1e-4565-a042-716f030ba99d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8eada43a-ea1e-4565-a042-716f030ba99d" (UID: "8eada43a-ea1e-4565-a042-716f030ba99d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.289802 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eada43a-ea1e-4565-a042-716f030ba99d-kube-api-access-bl655" (OuterVolumeSpecName: "kube-api-access-bl655") pod "8eada43a-ea1e-4565-a042-716f030ba99d" (UID: "8eada43a-ea1e-4565-a042-716f030ba99d"). InnerVolumeSpecName "kube-api-access-bl655". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.290005 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-kube-api-access-59xgd" (OuterVolumeSpecName: "kube-api-access-59xgd") pod "fed4ab08-54d0-4526-bd9a-3d1e660fc31a" (UID: "fed4ab08-54d0-4526-bd9a-3d1e660fc31a"). InnerVolumeSpecName "kube-api-access-59xgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.296868 4930 scope.go:117] "RemoveContainer" containerID="ab317a8bed770d0e507ec5d677d443b7eb12f2552b9583dd0d8a8a9b21ccd8f0" Nov 24 12:11:37 crc kubenswrapper[4930]: E1124 12:11:37.297209 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab317a8bed770d0e507ec5d677d443b7eb12f2552b9583dd0d8a8a9b21ccd8f0\": container with ID starting with ab317a8bed770d0e507ec5d677d443b7eb12f2552b9583dd0d8a8a9b21ccd8f0 not found: ID does not exist" containerID="ab317a8bed770d0e507ec5d677d443b7eb12f2552b9583dd0d8a8a9b21ccd8f0" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.297248 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab317a8bed770d0e507ec5d677d443b7eb12f2552b9583dd0d8a8a9b21ccd8f0"} err="failed to get container status \"ab317a8bed770d0e507ec5d677d443b7eb12f2552b9583dd0d8a8a9b21ccd8f0\": rpc error: code = NotFound desc = could 
not find container \"ab317a8bed770d0e507ec5d677d443b7eb12f2552b9583dd0d8a8a9b21ccd8f0\": container with ID starting with ab317a8bed770d0e507ec5d677d443b7eb12f2552b9583dd0d8a8a9b21ccd8f0 not found: ID does not exist" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.297272 4930 scope.go:117] "RemoveContainer" containerID="db946928ef655d99a6877240511608667d38504f1ce1c5a34bcb8c8c56fd5935" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.318719 4930 scope.go:117] "RemoveContainer" containerID="db946928ef655d99a6877240511608667d38504f1ce1c5a34bcb8c8c56fd5935" Nov 24 12:11:37 crc kubenswrapper[4930]: E1124 12:11:37.320923 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db946928ef655d99a6877240511608667d38504f1ce1c5a34bcb8c8c56fd5935\": container with ID starting with db946928ef655d99a6877240511608667d38504f1ce1c5a34bcb8c8c56fd5935 not found: ID does not exist" containerID="db946928ef655d99a6877240511608667d38504f1ce1c5a34bcb8c8c56fd5935" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.320962 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db946928ef655d99a6877240511608667d38504f1ce1c5a34bcb8c8c56fd5935"} err="failed to get container status \"db946928ef655d99a6877240511608667d38504f1ce1c5a34bcb8c8c56fd5935\": rpc error: code = NotFound desc = could not find container \"db946928ef655d99a6877240511608667d38504f1ce1c5a34bcb8c8c56fd5935\": container with ID starting with db946928ef655d99a6877240511608667d38504f1ce1c5a34bcb8c8c56fd5935 not found: ID does not exist" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.382808 4930 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.382854 4930 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-bl655\" (UniqueName: \"kubernetes.io/projected/8eada43a-ea1e-4565-a042-716f030ba99d-kube-api-access-bl655\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.382869 4930 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.382880 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.382891 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59xgd\" (UniqueName: \"kubernetes.io/projected/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-kube-api-access-59xgd\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.382903 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed4ab08-54d0-4526-bd9a-3d1e660fc31a-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.382913 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.382924 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eada43a-ea1e-4565-a042-716f030ba99d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.382934 4930 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8eada43a-ea1e-4565-a042-716f030ba99d-proxy-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.603161 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz"] Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.605877 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4ksnz"] Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.626732 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dkr44"] Nov 24 12:11:37 crc kubenswrapper[4930]: I1124 12:11:37.632302 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dkr44"] Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.092904 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eada43a-ea1e-4565-a042-716f030ba99d" path="/var/lib/kubelet/pods/8eada43a-ea1e-4565-a042-716f030ba99d/volumes" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.093951 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fed4ab08-54d0-4526-bd9a-3d1e660fc31a" path="/var/lib/kubelet/pods/fed4ab08-54d0-4526-bd9a-3d1e660fc31a/volumes" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.108166 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl"] Nov 24 12:11:38 crc kubenswrapper[4930]: E1124 12:11:38.108513 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eada43a-ea1e-4565-a042-716f030ba99d" containerName="controller-manager" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.108553 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eada43a-ea1e-4565-a042-716f030ba99d" containerName="controller-manager" Nov 24 12:11:38 crc kubenswrapper[4930]: E1124 12:11:38.108585 4930 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fed4ab08-54d0-4526-bd9a-3d1e660fc31a" containerName="route-controller-manager" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.108595 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="fed4ab08-54d0-4526-bd9a-3d1e660fc31a" containerName="route-controller-manager" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.108707 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eada43a-ea1e-4565-a042-716f030ba99d" containerName="controller-manager" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.108724 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="fed4ab08-54d0-4526-bd9a-3d1e660fc31a" containerName="route-controller-manager" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.109309 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.111420 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk"] Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.111797 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.112095 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.112161 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.112598 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.113323 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.113383 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.115010 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.115187 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.115304 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.115402 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.115728 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.115862 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 
12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.116090 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.120697 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.124104 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk"] Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.127597 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl"] Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.193211 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-config\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.193280 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pwdb\" (UniqueName: \"kubernetes.io/projected/43259b1a-7aba-4a41-bb2d-23b4b6179103-kube-api-access-6pwdb\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.193335 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-client-ca\") pod \"route-controller-manager-554f6fc954-h4ndk\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " 
pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.193364 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-config\") pod \"route-controller-manager-554f6fc954-h4ndk\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.193414 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-serving-cert\") pod \"route-controller-manager-554f6fc954-h4ndk\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.193460 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43259b1a-7aba-4a41-bb2d-23b4b6179103-serving-cert\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.193495 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scwdd\" (UniqueName: \"kubernetes.io/projected/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-kube-api-access-scwdd\") pod \"route-controller-manager-554f6fc954-h4ndk\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.193552 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-proxy-ca-bundles\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.193578 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-client-ca\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.277784 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl"] Nov 24 12:11:38 crc kubenswrapper[4930]: E1124 12:11:38.278477 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-6pwdb proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" podUID="43259b1a-7aba-4a41-bb2d-23b4b6179103" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.293809 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.294441 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-config\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.294501 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pwdb\" (UniqueName: \"kubernetes.io/projected/43259b1a-7aba-4a41-bb2d-23b4b6179103-kube-api-access-6pwdb\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.294559 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-client-ca\") pod \"route-controller-manager-554f6fc954-h4ndk\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.294585 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-config\") pod \"route-controller-manager-554f6fc954-h4ndk\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.294630 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-serving-cert\") pod \"route-controller-manager-554f6fc954-h4ndk\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.294673 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43259b1a-7aba-4a41-bb2d-23b4b6179103-serving-cert\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.294702 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scwdd\" (UniqueName: \"kubernetes.io/projected/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-kube-api-access-scwdd\") pod \"route-controller-manager-554f6fc954-h4ndk\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.294737 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-proxy-ca-bundles\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.294760 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-client-ca\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc 
kubenswrapper[4930]: I1124 12:11:38.295481 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-client-ca\") pod \"route-controller-manager-554f6fc954-h4ndk\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.295749 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-client-ca\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.295779 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-config\") pod \"route-controller-manager-554f6fc954-h4ndk\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.296600 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-config\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.296981 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-proxy-ca-bundles\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " 
pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.314436 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43259b1a-7aba-4a41-bb2d-23b4b6179103-serving-cert\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.314467 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-serving-cert\") pod \"route-controller-manager-554f6fc954-h4ndk\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.317547 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk"] Nov 24 12:11:38 crc kubenswrapper[4930]: E1124 12:11:38.317965 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-scwdd], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" podUID="7b68d72f-b5c6-43d6-8cd0-2876825d4df5" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.322506 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scwdd\" (UniqueName: \"kubernetes.io/projected/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-kube-api-access-scwdd\") pod \"route-controller-manager-554f6fc954-h4ndk\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.331705 4930 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pwdb\" (UniqueName: \"kubernetes.io/projected/43259b1a-7aba-4a41-bb2d-23b4b6179103-kube-api-access-6pwdb\") pod \"controller-manager-c4d6b4cb8-hqshl\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.360207 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.496473 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43259b1a-7aba-4a41-bb2d-23b4b6179103-serving-cert\") pod \"43259b1a-7aba-4a41-bb2d-23b4b6179103\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.496878 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pwdb\" (UniqueName: \"kubernetes.io/projected/43259b1a-7aba-4a41-bb2d-23b4b6179103-kube-api-access-6pwdb\") pod \"43259b1a-7aba-4a41-bb2d-23b4b6179103\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.497066 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-client-ca\") pod \"43259b1a-7aba-4a41-bb2d-23b4b6179103\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.497254 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-config\") pod \"43259b1a-7aba-4a41-bb2d-23b4b6179103\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " Nov 24 12:11:38 crc 
kubenswrapper[4930]: I1124 12:11:38.497354 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-proxy-ca-bundles\") pod \"43259b1a-7aba-4a41-bb2d-23b4b6179103\" (UID: \"43259b1a-7aba-4a41-bb2d-23b4b6179103\") " Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.498097 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "43259b1a-7aba-4a41-bb2d-23b4b6179103" (UID: "43259b1a-7aba-4a41-bb2d-23b4b6179103"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.498123 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-client-ca" (OuterVolumeSpecName: "client-ca") pod "43259b1a-7aba-4a41-bb2d-23b4b6179103" (UID: "43259b1a-7aba-4a41-bb2d-23b4b6179103"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.498629 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-config" (OuterVolumeSpecName: "config") pod "43259b1a-7aba-4a41-bb2d-23b4b6179103" (UID: "43259b1a-7aba-4a41-bb2d-23b4b6179103"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.499380 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43259b1a-7aba-4a41-bb2d-23b4b6179103-kube-api-access-6pwdb" (OuterVolumeSpecName: "kube-api-access-6pwdb") pod "43259b1a-7aba-4a41-bb2d-23b4b6179103" (UID: "43259b1a-7aba-4a41-bb2d-23b4b6179103"). 
InnerVolumeSpecName "kube-api-access-6pwdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.501657 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43259b1a-7aba-4a41-bb2d-23b4b6179103-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "43259b1a-7aba-4a41-bb2d-23b4b6179103" (UID: "43259b1a-7aba-4a41-bb2d-23b4b6179103"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.599089 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pwdb\" (UniqueName: \"kubernetes.io/projected/43259b1a-7aba-4a41-bb2d-23b4b6179103-kube-api-access-6pwdb\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.599125 4930 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.599138 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.599150 4930 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43259b1a-7aba-4a41-bb2d-23b4b6179103-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:38 crc kubenswrapper[4930]: I1124 12:11:38.599163 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43259b1a-7aba-4a41-bb2d-23b4b6179103-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.307352 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.307849 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.323703 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.357574 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75477b5544-tr8n7"] Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.358772 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.364526 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.364902 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.365403 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.365758 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.365861 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.367872 4930 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.368056 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.381559 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl"] Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.385283 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c4d6b4cb8-hqshl"] Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.388223 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75477b5544-tr8n7"] Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.408834 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-config\") pod \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.408879 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scwdd\" (UniqueName: \"kubernetes.io/projected/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-kube-api-access-scwdd\") pod \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.408902 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-serving-cert\") pod \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.408918 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-client-ca\") pod \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\" (UID: \"7b68d72f-b5c6-43d6-8cd0-2876825d4df5\") " Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.409482 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-client-ca" (OuterVolumeSpecName: "client-ca") pod "7b68d72f-b5c6-43d6-8cd0-2876825d4df5" (UID: "7b68d72f-b5c6-43d6-8cd0-2876825d4df5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.409493 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-config" (OuterVolumeSpecName: "config") pod "7b68d72f-b5c6-43d6-8cd0-2876825d4df5" (UID: "7b68d72f-b5c6-43d6-8cd0-2876825d4df5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.411448 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-kube-api-access-scwdd" (OuterVolumeSpecName: "kube-api-access-scwdd") pod "7b68d72f-b5c6-43d6-8cd0-2876825d4df5" (UID: "7b68d72f-b5c6-43d6-8cd0-2876825d4df5"). InnerVolumeSpecName "kube-api-access-scwdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.421955 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7b68d72f-b5c6-43d6-8cd0-2876825d4df5" (UID: "7b68d72f-b5c6-43d6-8cd0-2876825d4df5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.510730 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b856246-1dc2-4c69-9b52-7571973d51f1-client-ca\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.510790 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rk2m\" (UniqueName: \"kubernetes.io/projected/7b856246-1dc2-4c69-9b52-7571973d51f1-kube-api-access-7rk2m\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.510824 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b856246-1dc2-4c69-9b52-7571973d51f1-proxy-ca-bundles\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.510994 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b856246-1dc2-4c69-9b52-7571973d51f1-serving-cert\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.511146 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7b856246-1dc2-4c69-9b52-7571973d51f1-config\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.511251 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.511266 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scwdd\" (UniqueName: \"kubernetes.io/projected/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-kube-api-access-scwdd\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.511278 4930 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.511299 4930 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b68d72f-b5c6-43d6-8cd0-2876825d4df5-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.612003 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b856246-1dc2-4c69-9b52-7571973d51f1-serving-cert\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.612074 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b856246-1dc2-4c69-9b52-7571973d51f1-config\") pod 
\"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.612110 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b856246-1dc2-4c69-9b52-7571973d51f1-client-ca\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.612139 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rk2m\" (UniqueName: \"kubernetes.io/projected/7b856246-1dc2-4c69-9b52-7571973d51f1-kube-api-access-7rk2m\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.612163 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b856246-1dc2-4c69-9b52-7571973d51f1-proxy-ca-bundles\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.613181 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b856246-1dc2-4c69-9b52-7571973d51f1-client-ca\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.613254 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/7b856246-1dc2-4c69-9b52-7571973d51f1-proxy-ca-bundles\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.614831 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b856246-1dc2-4c69-9b52-7571973d51f1-config\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.615630 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b856246-1dc2-4c69-9b52-7571973d51f1-serving-cert\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.631048 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rk2m\" (UniqueName: \"kubernetes.io/projected/7b856246-1dc2-4c69-9b52-7571973d51f1-kube-api-access-7rk2m\") pod \"controller-manager-75477b5544-tr8n7\" (UID: \"7b856246-1dc2-4c69-9b52-7571973d51f1\") " pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.678646 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:39 crc kubenswrapper[4930]: I1124 12:11:39.991990 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75477b5544-tr8n7"] Nov 24 12:11:40 crc kubenswrapper[4930]: I1124 12:11:40.093783 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43259b1a-7aba-4a41-bb2d-23b4b6179103" path="/var/lib/kubelet/pods/43259b1a-7aba-4a41-bb2d-23b4b6179103/volumes" Nov 24 12:11:40 crc kubenswrapper[4930]: I1124 12:11:40.314392 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk" Nov 24 12:11:40 crc kubenswrapper[4930]: I1124 12:11:40.314389 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" event={"ID":"7b856246-1dc2-4c69-9b52-7571973d51f1","Type":"ContainerStarted","Data":"322431bb9f3e6ce835cce01deacc17e29d27cf4c6e2442002efafcc8cf8ea8f0"} Nov 24 12:11:40 crc kubenswrapper[4930]: I1124 12:11:40.316084 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" event={"ID":"7b856246-1dc2-4c69-9b52-7571973d51f1","Type":"ContainerStarted","Data":"ccae290390931a858d7f7ac6f0998d4e647a9ad96d51f59ab9f5fd6b23652cdd"} Nov 24 12:11:40 crc kubenswrapper[4930]: I1124 12:11:40.316156 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:40 crc kubenswrapper[4930]: I1124 12:11:40.321671 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" Nov 24 12:11:40 crc kubenswrapper[4930]: I1124 12:11:40.359029 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-75477b5544-tr8n7" podStartSLOduration=2.359003725 podStartE2EDuration="2.359003725s" podCreationTimestamp="2025-11-24 12:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:11:40.33474397 +0000 UTC m=+746.949071940" watchObservedRunningTime="2025-11-24 12:11:40.359003725 +0000 UTC m=+746.973331675" Nov 24 12:11:40 crc kubenswrapper[4930]: I1124 12:11:40.459576 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk"] Nov 24 12:11:40 crc kubenswrapper[4930]: I1124 12:11:40.465444 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-554f6fc954-h4ndk"] Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.091445 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b68d72f-b5c6-43d6-8cd0-2876825d4df5" path="/var/lib/kubelet/pods/7b68d72f-b5c6-43d6-8cd0-2876825d4df5/volumes" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.111600 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm"] Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.112663 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.125875 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.126216 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.126310 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.126684 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.126684 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.126830 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.138763 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm"] Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.249454 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854cd72c-16a5-4bef-8069-9c814fcc2c07-config\") pod \"route-controller-manager-9bdb56758-mkrrm\" (UID: \"854cd72c-16a5-4bef-8069-9c814fcc2c07\") " pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.249526 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854cd72c-16a5-4bef-8069-9c814fcc2c07-serving-cert\") pod \"route-controller-manager-9bdb56758-mkrrm\" (UID: \"854cd72c-16a5-4bef-8069-9c814fcc2c07\") " pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.249564 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ndht\" (UniqueName: \"kubernetes.io/projected/854cd72c-16a5-4bef-8069-9c814fcc2c07-kube-api-access-5ndht\") pod \"route-controller-manager-9bdb56758-mkrrm\" (UID: \"854cd72c-16a5-4bef-8069-9c814fcc2c07\") " pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.249619 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854cd72c-16a5-4bef-8069-9c814fcc2c07-client-ca\") pod \"route-controller-manager-9bdb56758-mkrrm\" (UID: \"854cd72c-16a5-4bef-8069-9c814fcc2c07\") " pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.351307 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854cd72c-16a5-4bef-8069-9c814fcc2c07-serving-cert\") pod \"route-controller-manager-9bdb56758-mkrrm\" (UID: \"854cd72c-16a5-4bef-8069-9c814fcc2c07\") " pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.351767 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ndht\" (UniqueName: \"kubernetes.io/projected/854cd72c-16a5-4bef-8069-9c814fcc2c07-kube-api-access-5ndht\") pod 
\"route-controller-manager-9bdb56758-mkrrm\" (UID: \"854cd72c-16a5-4bef-8069-9c814fcc2c07\") " pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.351857 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854cd72c-16a5-4bef-8069-9c814fcc2c07-client-ca\") pod \"route-controller-manager-9bdb56758-mkrrm\" (UID: \"854cd72c-16a5-4bef-8069-9c814fcc2c07\") " pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.351993 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854cd72c-16a5-4bef-8069-9c814fcc2c07-config\") pod \"route-controller-manager-9bdb56758-mkrrm\" (UID: \"854cd72c-16a5-4bef-8069-9c814fcc2c07\") " pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.353508 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854cd72c-16a5-4bef-8069-9c814fcc2c07-config\") pod \"route-controller-manager-9bdb56758-mkrrm\" (UID: \"854cd72c-16a5-4bef-8069-9c814fcc2c07\") " pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.354047 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854cd72c-16a5-4bef-8069-9c814fcc2c07-client-ca\") pod \"route-controller-manager-9bdb56758-mkrrm\" (UID: \"854cd72c-16a5-4bef-8069-9c814fcc2c07\") " pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.370126 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/854cd72c-16a5-4bef-8069-9c814fcc2c07-serving-cert\") pod \"route-controller-manager-9bdb56758-mkrrm\" (UID: \"854cd72c-16a5-4bef-8069-9c814fcc2c07\") " pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.374932 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ndht\" (UniqueName: \"kubernetes.io/projected/854cd72c-16a5-4bef-8069-9c814fcc2c07-kube-api-access-5ndht\") pod \"route-controller-manager-9bdb56758-mkrrm\" (UID: \"854cd72c-16a5-4bef-8069-9c814fcc2c07\") " pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.431255 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:42 crc kubenswrapper[4930]: I1124 12:11:42.669758 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm"] Nov 24 12:11:42 crc kubenswrapper[4930]: W1124 12:11:42.682812 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod854cd72c_16a5_4bef_8069_9c814fcc2c07.slice/crio-f2c1ab4abbc054609287d4d8f71ca0d1a5b715acbfc08fbcc91770f7b3ceb9bd WatchSource:0}: Error finding container f2c1ab4abbc054609287d4d8f71ca0d1a5b715acbfc08fbcc91770f7b3ceb9bd: Status 404 returned error can't find the container with id f2c1ab4abbc054609287d4d8f71ca0d1a5b715acbfc08fbcc91770f7b3ceb9bd Nov 24 12:11:43 crc kubenswrapper[4930]: I1124 12:11:43.344984 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" 
event={"ID":"854cd72c-16a5-4bef-8069-9c814fcc2c07","Type":"ContainerStarted","Data":"dc8bed50e98c687cbe886157843e1d97f78ad796aa2aa824309f3d5dd374f5dd"} Nov 24 12:11:43 crc kubenswrapper[4930]: I1124 12:11:43.345427 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:43 crc kubenswrapper[4930]: I1124 12:11:43.345442 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" event={"ID":"854cd72c-16a5-4bef-8069-9c814fcc2c07","Type":"ContainerStarted","Data":"f2c1ab4abbc054609287d4d8f71ca0d1a5b715acbfc08fbcc91770f7b3ceb9bd"} Nov 24 12:11:43 crc kubenswrapper[4930]: I1124 12:11:43.354183 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" Nov 24 12:11:43 crc kubenswrapper[4930]: I1124 12:11:43.368880 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9bdb56758-mkrrm" podStartSLOduration=5.368863908 podStartE2EDuration="5.368863908s" podCreationTimestamp="2025-11-24 12:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:11:43.365525681 +0000 UTC m=+749.979853631" watchObservedRunningTime="2025-11-24 12:11:43.368863908 +0000 UTC m=+749.983191858" Nov 24 12:11:46 crc kubenswrapper[4930]: I1124 12:11:46.538156 4930 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.156913 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6d8988b99d-fjfg4" Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 
12:11:51.859091 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd"] Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.860395 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd" Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.862337 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bxwpz" Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.862585 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.868052 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-xpbvr"] Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.878404 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xpbvr" Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.883480 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.883932 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.889320 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd"] Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.976182 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-t7cvk"] Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.977381 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.981684 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.982080 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.982337 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.982576 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-gxrdp"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.984392 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-twjmq"]
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.985577 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-twjmq"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.986580 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.993928 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-twjmq"]
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.997275 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2b7c02aa-da2a-43db-9985-96ae84d5e3df-frr-conf\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.997319 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b7c02aa-da2a-43db-9985-96ae84d5e3df-metrics-certs\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.997340 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2b7c02aa-da2a-43db-9985-96ae84d5e3df-reloader\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.997360 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/086e1816-851c-4997-b8f2-04563ff50e05-cert\") pod \"frr-k8s-webhook-server-6998585d5-tdgdd\" (UID: \"086e1816-851c-4997-b8f2-04563ff50e05\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.997377 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2b7c02aa-da2a-43db-9985-96ae84d5e3df-frr-startup\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.997396 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5tfc\" (UniqueName: \"kubernetes.io/projected/086e1816-851c-4997-b8f2-04563ff50e05-kube-api-access-w5tfc\") pod \"frr-k8s-webhook-server-6998585d5-tdgdd\" (UID: \"086e1816-851c-4997-b8f2-04563ff50e05\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.997417 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2b7c02aa-da2a-43db-9985-96ae84d5e3df-frr-sockets\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.997491 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c964\" (UniqueName: \"kubernetes.io/projected/2b7c02aa-da2a-43db-9985-96ae84d5e3df-kube-api-access-4c964\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:51 crc kubenswrapper[4930]: I1124 12:11:51.997608 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2b7c02aa-da2a-43db-9985-96ae84d5e3df-metrics\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099144 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2b7c02aa-da2a-43db-9985-96ae84d5e3df-metrics\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099191 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s2w4\" (UniqueName: \"kubernetes.io/projected/cdda2566-3ca8-492b-a37f-18a8beccb6a6-kube-api-access-7s2w4\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099225 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86addadb-2b19-4ba8-b365-0d5d5dd326c5-cert\") pod \"controller-6c7b4b5f48-twjmq\" (UID: \"86addadb-2b19-4ba8-b365-0d5d5dd326c5\") " pod="metallb-system/controller-6c7b4b5f48-twjmq"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099249 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2b7c02aa-da2a-43db-9985-96ae84d5e3df-frr-conf\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099353 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cdda2566-3ca8-492b-a37f-18a8beccb6a6-memberlist\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099407 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdda2566-3ca8-492b-a37f-18a8beccb6a6-metrics-certs\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099439 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dpmd\" (UniqueName: \"kubernetes.io/projected/86addadb-2b19-4ba8-b365-0d5d5dd326c5-kube-api-access-2dpmd\") pod \"controller-6c7b4b5f48-twjmq\" (UID: \"86addadb-2b19-4ba8-b365-0d5d5dd326c5\") " pod="metallb-system/controller-6c7b4b5f48-twjmq"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099478 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b7c02aa-da2a-43db-9985-96ae84d5e3df-metrics-certs\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099506 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cdda2566-3ca8-492b-a37f-18a8beccb6a6-metallb-excludel2\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099531 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2b7c02aa-da2a-43db-9985-96ae84d5e3df-reloader\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099600 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/086e1816-851c-4997-b8f2-04563ff50e05-cert\") pod \"frr-k8s-webhook-server-6998585d5-tdgdd\" (UID: \"086e1816-851c-4997-b8f2-04563ff50e05\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099632 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2b7c02aa-da2a-43db-9985-96ae84d5e3df-frr-startup\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099613 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2b7c02aa-da2a-43db-9985-96ae84d5e3df-metrics\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099681 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5tfc\" (UniqueName: \"kubernetes.io/projected/086e1816-851c-4997-b8f2-04563ff50e05-kube-api-access-w5tfc\") pod \"frr-k8s-webhook-server-6998585d5-tdgdd\" (UID: \"086e1816-851c-4997-b8f2-04563ff50e05\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099729 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2b7c02aa-da2a-43db-9985-96ae84d5e3df-frr-conf\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.099734 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86addadb-2b19-4ba8-b365-0d5d5dd326c5-metrics-certs\") pod \"controller-6c7b4b5f48-twjmq\" (UID: \"86addadb-2b19-4ba8-b365-0d5d5dd326c5\") " pod="metallb-system/controller-6c7b4b5f48-twjmq"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.100063 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2b7c02aa-da2a-43db-9985-96ae84d5e3df-frr-sockets\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.100122 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c964\" (UniqueName: \"kubernetes.io/projected/2b7c02aa-da2a-43db-9985-96ae84d5e3df-kube-api-access-4c964\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.100907 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2b7c02aa-da2a-43db-9985-96ae84d5e3df-reloader\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.103974 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2b7c02aa-da2a-43db-9985-96ae84d5e3df-frr-sockets\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.104422 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2b7c02aa-da2a-43db-9985-96ae84d5e3df-frr-startup\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.114106 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b7c02aa-da2a-43db-9985-96ae84d5e3df-metrics-certs\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.119830 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5tfc\" (UniqueName: \"kubernetes.io/projected/086e1816-851c-4997-b8f2-04563ff50e05-kube-api-access-w5tfc\") pod \"frr-k8s-webhook-server-6998585d5-tdgdd\" (UID: \"086e1816-851c-4997-b8f2-04563ff50e05\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.120753 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c964\" (UniqueName: \"kubernetes.io/projected/2b7c02aa-da2a-43db-9985-96ae84d5e3df-kube-api-access-4c964\") pod \"frr-k8s-xpbvr\" (UID: \"2b7c02aa-da2a-43db-9985-96ae84d5e3df\") " pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.126996 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/086e1816-851c-4997-b8f2-04563ff50e05-cert\") pod \"frr-k8s-webhook-server-6998585d5-tdgdd\" (UID: \"086e1816-851c-4997-b8f2-04563ff50e05\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.199760 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.201639 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86addadb-2b19-4ba8-b365-0d5d5dd326c5-metrics-certs\") pod \"controller-6c7b4b5f48-twjmq\" (UID: \"86addadb-2b19-4ba8-b365-0d5d5dd326c5\") " pod="metallb-system/controller-6c7b4b5f48-twjmq"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.201807 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s2w4\" (UniqueName: \"kubernetes.io/projected/cdda2566-3ca8-492b-a37f-18a8beccb6a6-kube-api-access-7s2w4\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.201871 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86addadb-2b19-4ba8-b365-0d5d5dd326c5-cert\") pod \"controller-6c7b4b5f48-twjmq\" (UID: \"86addadb-2b19-4ba8-b365-0d5d5dd326c5\") " pod="metallb-system/controller-6c7b4b5f48-twjmq"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.201926 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cdda2566-3ca8-492b-a37f-18a8beccb6a6-memberlist\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.201948 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdda2566-3ca8-492b-a37f-18a8beccb6a6-metrics-certs\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.201968 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dpmd\" (UniqueName: \"kubernetes.io/projected/86addadb-2b19-4ba8-b365-0d5d5dd326c5-kube-api-access-2dpmd\") pod \"controller-6c7b4b5f48-twjmq\" (UID: \"86addadb-2b19-4ba8-b365-0d5d5dd326c5\") " pod="metallb-system/controller-6c7b4b5f48-twjmq"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.202006 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cdda2566-3ca8-492b-a37f-18a8beccb6a6-metallb-excludel2\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:52 crc kubenswrapper[4930]: E1124 12:11:52.202471 4930 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 24 12:11:52 crc kubenswrapper[4930]: E1124 12:11:52.202654 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdda2566-3ca8-492b-a37f-18a8beccb6a6-memberlist podName:cdda2566-3ca8-492b-a37f-18a8beccb6a6 nodeName:}" failed. No retries permitted until 2025-11-24 12:11:52.70263106 +0000 UTC m=+759.316959010 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cdda2566-3ca8-492b-a37f-18a8beccb6a6-memberlist") pod "speaker-t7cvk" (UID: "cdda2566-3ca8-492b-a37f-18a8beccb6a6") : secret "metallb-memberlist" not found
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.202778 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cdda2566-3ca8-492b-a37f-18a8beccb6a6-metallb-excludel2\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.204843 4930 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.205249 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86addadb-2b19-4ba8-b365-0d5d5dd326c5-metrics-certs\") pod \"controller-6c7b4b5f48-twjmq\" (UID: \"86addadb-2b19-4ba8-b365-0d5d5dd326c5\") " pod="metallb-system/controller-6c7b4b5f48-twjmq"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.208852 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdda2566-3ca8-492b-a37f-18a8beccb6a6-metrics-certs\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.216264 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xpbvr"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.216485 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86addadb-2b19-4ba8-b365-0d5d5dd326c5-cert\") pod \"controller-6c7b4b5f48-twjmq\" (UID: \"86addadb-2b19-4ba8-b365-0d5d5dd326c5\") " pod="metallb-system/controller-6c7b4b5f48-twjmq"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.222271 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dpmd\" (UniqueName: \"kubernetes.io/projected/86addadb-2b19-4ba8-b365-0d5d5dd326c5-kube-api-access-2dpmd\") pod \"controller-6c7b4b5f48-twjmq\" (UID: \"86addadb-2b19-4ba8-b365-0d5d5dd326c5\") " pod="metallb-system/controller-6c7b4b5f48-twjmq"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.230703 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s2w4\" (UniqueName: \"kubernetes.io/projected/cdda2566-3ca8-492b-a37f-18a8beccb6a6-kube-api-access-7s2w4\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.305150 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-twjmq"
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.398372 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpbvr" event={"ID":"2b7c02aa-da2a-43db-9985-96ae84d5e3df","Type":"ContainerStarted","Data":"7916d345d8d0baa8ae24e4db1b856973f61a7ac4cbbe92ac6ef4c75ee067d385"}
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.606836 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd"]
Nov 24 12:11:52 crc kubenswrapper[4930]: W1124 12:11:52.609927 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod086e1816_851c_4997_b8f2_04563ff50e05.slice/crio-afa1948569a6e201853811d103ee67514426fa436ce17cb45a0f3d47b02afd06 WatchSource:0}: Error finding container afa1948569a6e201853811d103ee67514426fa436ce17cb45a0f3d47b02afd06: Status 404 returned error can't find the container with id afa1948569a6e201853811d103ee67514426fa436ce17cb45a0f3d47b02afd06
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.711791 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cdda2566-3ca8-492b-a37f-18a8beccb6a6-memberlist\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:52 crc kubenswrapper[4930]: E1124 12:11:52.712060 4930 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 24 12:11:52 crc kubenswrapper[4930]: E1124 12:11:52.712186 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdda2566-3ca8-492b-a37f-18a8beccb6a6-memberlist podName:cdda2566-3ca8-492b-a37f-18a8beccb6a6 nodeName:}" failed. No retries permitted until 2025-11-24 12:11:53.712156566 +0000 UTC m=+760.326484516 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cdda2566-3ca8-492b-a37f-18a8beccb6a6-memberlist") pod "speaker-t7cvk" (UID: "cdda2566-3ca8-492b-a37f-18a8beccb6a6") : secret "metallb-memberlist" not found
Nov 24 12:11:52 crc kubenswrapper[4930]: I1124 12:11:52.727281 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-twjmq"]
Nov 24 12:11:52 crc kubenswrapper[4930]: W1124 12:11:52.739053 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86addadb_2b19_4ba8_b365_0d5d5dd326c5.slice/crio-e779e554b62c7f50ea46d37cb77099bee1f8b75fae088ba6dfb7fe75d851f8d8 WatchSource:0}: Error finding container e779e554b62c7f50ea46d37cb77099bee1f8b75fae088ba6dfb7fe75d851f8d8: Status 404 returned error can't find the container with id e779e554b62c7f50ea46d37cb77099bee1f8b75fae088ba6dfb7fe75d851f8d8
Nov 24 12:11:53 crc kubenswrapper[4930]: I1124 12:11:53.406963 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-twjmq" event={"ID":"86addadb-2b19-4ba8-b365-0d5d5dd326c5","Type":"ContainerStarted","Data":"56703b533f8712e121530801ee18c9efdcf205d937c7e43fe2e960823c3f6386"}
Nov 24 12:11:53 crc kubenswrapper[4930]: I1124 12:11:53.407324 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-twjmq"
Nov 24 12:11:53 crc kubenswrapper[4930]: I1124 12:11:53.407334 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-twjmq" event={"ID":"86addadb-2b19-4ba8-b365-0d5d5dd326c5","Type":"ContainerStarted","Data":"d79fa9af34f29a6a817150dcfc41f287493df6978b6585e4b4fb12765069ba4e"}
Nov 24 12:11:53 crc kubenswrapper[4930]: I1124 12:11:53.407343 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-twjmq" event={"ID":"86addadb-2b19-4ba8-b365-0d5d5dd326c5","Type":"ContainerStarted","Data":"e779e554b62c7f50ea46d37cb77099bee1f8b75fae088ba6dfb7fe75d851f8d8"}
Nov 24 12:11:53 crc kubenswrapper[4930]: I1124 12:11:53.407735 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd" event={"ID":"086e1816-851c-4997-b8f2-04563ff50e05","Type":"ContainerStarted","Data":"afa1948569a6e201853811d103ee67514426fa436ce17cb45a0f3d47b02afd06"}
Nov 24 12:11:53 crc kubenswrapper[4930]: I1124 12:11:53.428095 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-twjmq" podStartSLOduration=2.428078256 podStartE2EDuration="2.428078256s" podCreationTimestamp="2025-11-24 12:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:11:53.423488513 +0000 UTC m=+760.037816463" watchObservedRunningTime="2025-11-24 12:11:53.428078256 +0000 UTC m=+760.042406206"
Nov 24 12:11:53 crc kubenswrapper[4930]: I1124 12:11:53.726064 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cdda2566-3ca8-492b-a37f-18a8beccb6a6-memberlist\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:53 crc kubenswrapper[4930]: I1124 12:11:53.733759 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cdda2566-3ca8-492b-a37f-18a8beccb6a6-memberlist\") pod \"speaker-t7cvk\" (UID: \"cdda2566-3ca8-492b-a37f-18a8beccb6a6\") " pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:53 crc kubenswrapper[4930]: I1124 12:11:53.811086 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:54 crc kubenswrapper[4930]: I1124 12:11:54.415661 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-t7cvk" event={"ID":"cdda2566-3ca8-492b-a37f-18a8beccb6a6","Type":"ContainerStarted","Data":"8fcb37c0d5dd6f983cc3e8ad0247fdfdb9a273fcaebda48d7876ab1d629d863d"}
Nov 24 12:11:54 crc kubenswrapper[4930]: I1124 12:11:54.416051 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-t7cvk" event={"ID":"cdda2566-3ca8-492b-a37f-18a8beccb6a6","Type":"ContainerStarted","Data":"2fe585d7698c72be422a13d909af5c22fc35911224a5e707f18c8ce64b1fdb83"}
Nov 24 12:11:54 crc kubenswrapper[4930]: I1124 12:11:54.416070 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-t7cvk" event={"ID":"cdda2566-3ca8-492b-a37f-18a8beccb6a6","Type":"ContainerStarted","Data":"14394ea70ad95a433e55f3e9552e498808825b71cdc79e7ebc635a89d42eedb8"}
Nov 24 12:11:54 crc kubenswrapper[4930]: I1124 12:11:54.416240 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-t7cvk"
Nov 24 12:11:54 crc kubenswrapper[4930]: I1124 12:11:54.439451 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-t7cvk" podStartSLOduration=3.439430894 podStartE2EDuration="3.439430894s" podCreationTimestamp="2025-11-24 12:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:11:54.435088808 +0000 UTC m=+761.049416768" watchObservedRunningTime="2025-11-24 12:11:54.439430894 +0000 UTC m=+761.053758844"
Nov 24 12:11:56 crc kubenswrapper[4930]: I1124 12:11:56.555103 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2slvn"]
Nov 24 12:11:56 crc kubenswrapper[4930]: I1124 12:11:56.556495 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2slvn"
Nov 24 12:11:56 crc kubenswrapper[4930]: I1124 12:11:56.569061 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2slvn"]
Nov 24 12:11:56 crc kubenswrapper[4930]: I1124 12:11:56.664899 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aef4010-162a-40dc-9841-4d0e64d1bae2-utilities\") pod \"community-operators-2slvn\" (UID: \"3aef4010-162a-40dc-9841-4d0e64d1bae2\") " pod="openshift-marketplace/community-operators-2slvn"
Nov 24 12:11:56 crc kubenswrapper[4930]: I1124 12:11:56.665088 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aef4010-162a-40dc-9841-4d0e64d1bae2-catalog-content\") pod \"community-operators-2slvn\" (UID: \"3aef4010-162a-40dc-9841-4d0e64d1bae2\") " pod="openshift-marketplace/community-operators-2slvn"
Nov 24 12:11:56 crc kubenswrapper[4930]: I1124 12:11:56.665156 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45v6d\" (UniqueName: \"kubernetes.io/projected/3aef4010-162a-40dc-9841-4d0e64d1bae2-kube-api-access-45v6d\") pod \"community-operators-2slvn\" (UID: \"3aef4010-162a-40dc-9841-4d0e64d1bae2\") " pod="openshift-marketplace/community-operators-2slvn"
Nov 24 12:11:56 crc kubenswrapper[4930]: I1124 12:11:56.766241 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aef4010-162a-40dc-9841-4d0e64d1bae2-catalog-content\") pod \"community-operators-2slvn\" (UID: \"3aef4010-162a-40dc-9841-4d0e64d1bae2\") " pod="openshift-marketplace/community-operators-2slvn"
Nov 24 12:11:56 crc kubenswrapper[4930]: I1124 12:11:56.766308 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45v6d\" (UniqueName: \"kubernetes.io/projected/3aef4010-162a-40dc-9841-4d0e64d1bae2-kube-api-access-45v6d\") pod \"community-operators-2slvn\" (UID: \"3aef4010-162a-40dc-9841-4d0e64d1bae2\") " pod="openshift-marketplace/community-operators-2slvn"
Nov 24 12:11:56 crc kubenswrapper[4930]: I1124 12:11:56.766342 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aef4010-162a-40dc-9841-4d0e64d1bae2-utilities\") pod \"community-operators-2slvn\" (UID: \"3aef4010-162a-40dc-9841-4d0e64d1bae2\") " pod="openshift-marketplace/community-operators-2slvn"
Nov 24 12:11:56 crc kubenswrapper[4930]: I1124 12:11:56.766824 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aef4010-162a-40dc-9841-4d0e64d1bae2-catalog-content\") pod \"community-operators-2slvn\" (UID: \"3aef4010-162a-40dc-9841-4d0e64d1bae2\") " pod="openshift-marketplace/community-operators-2slvn"
Nov 24 12:11:56 crc kubenswrapper[4930]: I1124 12:11:56.766905 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aef4010-162a-40dc-9841-4d0e64d1bae2-utilities\") pod \"community-operators-2slvn\" (UID: \"3aef4010-162a-40dc-9841-4d0e64d1bae2\") " pod="openshift-marketplace/community-operators-2slvn"
Nov 24 12:11:56 crc kubenswrapper[4930]: I1124 12:11:56.804089 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45v6d\" (UniqueName: \"kubernetes.io/projected/3aef4010-162a-40dc-9841-4d0e64d1bae2-kube-api-access-45v6d\") pod \"community-operators-2slvn\" (UID: \"3aef4010-162a-40dc-9841-4d0e64d1bae2\") " pod="openshift-marketplace/community-operators-2slvn"
Nov 24 12:11:56 crc kubenswrapper[4930]: I1124 12:11:56.903937 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2slvn"
Nov 24 12:11:59 crc kubenswrapper[4930]: I1124 12:11:59.661768 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2slvn"]
Nov 24 12:11:59 crc kubenswrapper[4930]: W1124 12:11:59.671695 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aef4010_162a_40dc_9841_4d0e64d1bae2.slice/crio-29468f95b7d466c468780303a2eb64f28359ff3d524b44f4d8fad81dc91614ae WatchSource:0}: Error finding container 29468f95b7d466c468780303a2eb64f28359ff3d524b44f4d8fad81dc91614ae: Status 404 returned error can't find the container with id 29468f95b7d466c468780303a2eb64f28359ff3d524b44f4d8fad81dc91614ae
Nov 24 12:12:00 crc kubenswrapper[4930]: I1124 12:12:00.461160 4930 generic.go:334] "Generic (PLEG): container finished" podID="3aef4010-162a-40dc-9841-4d0e64d1bae2" containerID="206536122fe30125ae8bfb563b241169a815613e9ea91ee980f11e70450397a1" exitCode=0
Nov 24 12:12:00 crc kubenswrapper[4930]: I1124 12:12:00.461419 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2slvn" event={"ID":"3aef4010-162a-40dc-9841-4d0e64d1bae2","Type":"ContainerDied","Data":"206536122fe30125ae8bfb563b241169a815613e9ea91ee980f11e70450397a1"}
Nov 24 12:12:00 crc kubenswrapper[4930]: I1124 12:12:00.461511 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2slvn" event={"ID":"3aef4010-162a-40dc-9841-4d0e64d1bae2","Type":"ContainerStarted","Data":"29468f95b7d466c468780303a2eb64f28359ff3d524b44f4d8fad81dc91614ae"}
Nov 24 12:12:00 crc kubenswrapper[4930]: I1124 12:12:00.464069 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd" event={"ID":"086e1816-851c-4997-b8f2-04563ff50e05","Type":"ContainerStarted","Data":"15821200eb5bb282eb778ce5e07a9dd7776b029fc56cb90ed5040879c841eaa4"}
Nov 24 12:12:00 crc kubenswrapper[4930]: I1124 12:12:00.464616 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd"
Nov 24 12:12:00 crc kubenswrapper[4930]: I1124 12:12:00.467458 4930 generic.go:334] "Generic (PLEG): container finished" podID="2b7c02aa-da2a-43db-9985-96ae84d5e3df" containerID="d155622239cb89f0e6eb7a2f81bdcea7bd568806411c21a6a57347044ef4ff41" exitCode=0
Nov 24 12:12:00 crc kubenswrapper[4930]: I1124 12:12:00.467491 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpbvr" event={"ID":"2b7c02aa-da2a-43db-9985-96ae84d5e3df","Type":"ContainerDied","Data":"d155622239cb89f0e6eb7a2f81bdcea7bd568806411c21a6a57347044ef4ff41"}
Nov 24 12:12:00 crc kubenswrapper[4930]: I1124 12:12:00.498319 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd" podStartSLOduration=2.723215887 podStartE2EDuration="9.498302798s" podCreationTimestamp="2025-11-24 12:11:51 +0000 UTC" firstStartedPulling="2025-11-24 12:11:52.612349788 +0000 UTC m=+759.226677738" lastFinishedPulling="2025-11-24 12:11:59.387436709 +0000 UTC m=+766.001764649" observedRunningTime="2025-11-24 12:12:00.495052234 +0000 UTC m=+767.109380204" watchObservedRunningTime="2025-11-24 12:12:00.498302798 +0000 UTC m=+767.112630738"
Nov 24 12:12:01 crc kubenswrapper[4930]: I1124 12:12:01.480935 4930 generic.go:334] "Generic (PLEG): container finished" podID="2b7c02aa-da2a-43db-9985-96ae84d5e3df" containerID="794e2dcc5f1dff1be9164285049048460472ebc5f0c1a1a34e932ff92600291c" exitCode=0
Nov 24 12:12:01 crc kubenswrapper[4930]: I1124 12:12:01.481001 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpbvr" event={"ID":"2b7c02aa-da2a-43db-9985-96ae84d5e3df","Type":"ContainerDied","Data":"794e2dcc5f1dff1be9164285049048460472ebc5f0c1a1a34e932ff92600291c"}
Nov 24 12:12:01 crc kubenswrapper[4930]: I1124 12:12:01.483017 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2slvn" event={"ID":"3aef4010-162a-40dc-9841-4d0e64d1bae2","Type":"ContainerStarted","Data":"c4f04663b292ae8a3dceb1c86de5e754e6c5355c8247eab2bafdb23bc5247b1d"}
Nov 24 12:12:02 crc kubenswrapper[4930]: I1124 12:12:02.308866 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-twjmq"
Nov 24 12:12:02 crc kubenswrapper[4930]: I1124 12:12:02.489612 4930 generic.go:334] "Generic (PLEG): container finished" podID="2b7c02aa-da2a-43db-9985-96ae84d5e3df" containerID="c127a7b37bb647061546078f46023067311cfb25345fd95aa10a6c50f23daea8" exitCode=0
Nov 24 12:12:02 crc kubenswrapper[4930]: I1124 12:12:02.489663 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpbvr" event={"ID":"2b7c02aa-da2a-43db-9985-96ae84d5e3df","Type":"ContainerDied","Data":"c127a7b37bb647061546078f46023067311cfb25345fd95aa10a6c50f23daea8"}
Nov 24 12:12:02 crc kubenswrapper[4930]: I1124 12:12:02.491811 4930 generic.go:334] "Generic (PLEG): container finished" podID="3aef4010-162a-40dc-9841-4d0e64d1bae2" containerID="c4f04663b292ae8a3dceb1c86de5e754e6c5355c8247eab2bafdb23bc5247b1d" exitCode=0
Nov 24 12:12:02 crc kubenswrapper[4930]: I1124 12:12:02.492271 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2slvn" event={"ID":"3aef4010-162a-40dc-9841-4d0e64d1bae2","Type":"ContainerDied","Data":"c4f04663b292ae8a3dceb1c86de5e754e6c5355c8247eab2bafdb23bc5247b1d"}
Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.270070 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tvblz"]
Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.271388 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tvblz"
Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.284706 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tvblz"]
Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.364877 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/770736f7-d373-4c1c-8980-5f71763d1b26-catalog-content\") pod \"redhat-operators-tvblz\" (UID: \"770736f7-d373-4c1c-8980-5f71763d1b26\") " pod="openshift-marketplace/redhat-operators-tvblz"
Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.364980 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/770736f7-d373-4c1c-8980-5f71763d1b26-utilities\") pod \"redhat-operators-tvblz\" (UID: \"770736f7-d373-4c1c-8980-5f71763d1b26\") " pod="openshift-marketplace/redhat-operators-tvblz"
Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.365002 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjlsp\" (UniqueName: \"kubernetes.io/projected/770736f7-d373-4c1c-8980-5f71763d1b26-kube-api-access-rjlsp\") pod \"redhat-operators-tvblz\" (UID: \"770736f7-d373-4c1c-8980-5f71763d1b26\") " pod="openshift-marketplace/redhat-operators-tvblz"
Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.466017 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/770736f7-d373-4c1c-8980-5f71763d1b26-utilities\") pod \"redhat-operators-tvblz\" (UID: \"770736f7-d373-4c1c-8980-5f71763d1b26\") " pod="openshift-marketplace/redhat-operators-tvblz"
Nov 24 12:12:03 crc kubenswrapper[4930]: I1124
12:12:03.466097 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjlsp\" (UniqueName: \"kubernetes.io/projected/770736f7-d373-4c1c-8980-5f71763d1b26-kube-api-access-rjlsp\") pod \"redhat-operators-tvblz\" (UID: \"770736f7-d373-4c1c-8980-5f71763d1b26\") " pod="openshift-marketplace/redhat-operators-tvblz" Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.466156 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/770736f7-d373-4c1c-8980-5f71763d1b26-catalog-content\") pod \"redhat-operators-tvblz\" (UID: \"770736f7-d373-4c1c-8980-5f71763d1b26\") " pod="openshift-marketplace/redhat-operators-tvblz" Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.466601 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/770736f7-d373-4c1c-8980-5f71763d1b26-utilities\") pod \"redhat-operators-tvblz\" (UID: \"770736f7-d373-4c1c-8980-5f71763d1b26\") " pod="openshift-marketplace/redhat-operators-tvblz" Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.466695 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/770736f7-d373-4c1c-8980-5f71763d1b26-catalog-content\") pod \"redhat-operators-tvblz\" (UID: \"770736f7-d373-4c1c-8980-5f71763d1b26\") " pod="openshift-marketplace/redhat-operators-tvblz" Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.487750 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjlsp\" (UniqueName: \"kubernetes.io/projected/770736f7-d373-4c1c-8980-5f71763d1b26-kube-api-access-rjlsp\") pod \"redhat-operators-tvblz\" (UID: \"770736f7-d373-4c1c-8980-5f71763d1b26\") " pod="openshift-marketplace/redhat-operators-tvblz" Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.501914 4930 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/frr-k8s-xpbvr" event={"ID":"2b7c02aa-da2a-43db-9985-96ae84d5e3df","Type":"ContainerStarted","Data":"605f2ba96ef18424e7b81baeb7e8b2700b455cc74f6be7344661de1282b640b8"} Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.501954 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpbvr" event={"ID":"2b7c02aa-da2a-43db-9985-96ae84d5e3df","Type":"ContainerStarted","Data":"255fcc7213a1ac02a6f8949b1633657b99eb8a015825871fd49a30b39f2450e0"} Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.501966 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpbvr" event={"ID":"2b7c02aa-da2a-43db-9985-96ae84d5e3df","Type":"ContainerStarted","Data":"d663146f1f1186e1747d26176921389fa7bbf00927d3fffd6d94a1ea4778e98e"} Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.501975 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpbvr" event={"ID":"2b7c02aa-da2a-43db-9985-96ae84d5e3df","Type":"ContainerStarted","Data":"28c4c6f8b7ce4a2cb56cc6c4d46f3b6807921c5214b0f50a721a52f53c9c09c9"} Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.501985 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpbvr" event={"ID":"2b7c02aa-da2a-43db-9985-96ae84d5e3df","Type":"ContainerStarted","Data":"dfa1ca3050932cdc4452e9ed8208359d8bc998e79baaec494a311da062c2b3cd"} Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.501994 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpbvr" event={"ID":"2b7c02aa-da2a-43db-9985-96ae84d5e3df","Type":"ContainerStarted","Data":"6d994ab64e4cd457e12e88d7c0dfd5d2765904a960dc21abac4757c1ecfeb5d2"} Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.502064 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xpbvr" Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.504171 4930 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-2slvn" event={"ID":"3aef4010-162a-40dc-9841-4d0e64d1bae2","Type":"ContainerStarted","Data":"ce7e8387dab203081470f6a624d0b6592ac3cfe541d698c89a099c1c6808caf5"} Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.527116 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-xpbvr" podStartSLOduration=5.527454169 podStartE2EDuration="12.527091631s" podCreationTimestamp="2025-11-24 12:11:51 +0000 UTC" firstStartedPulling="2025-11-24 12:11:52.359888127 +0000 UTC m=+758.974216067" lastFinishedPulling="2025-11-24 12:11:59.359525579 +0000 UTC m=+765.973853529" observedRunningTime="2025-11-24 12:12:03.523365242 +0000 UTC m=+770.137693212" watchObservedRunningTime="2025-11-24 12:12:03.527091631 +0000 UTC m=+770.141419581" Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.543683 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2slvn" podStartSLOduration=5.14919097 podStartE2EDuration="7.543667392s" podCreationTimestamp="2025-11-24 12:11:56 +0000 UTC" firstStartedPulling="2025-11-24 12:12:00.463029104 +0000 UTC m=+767.077357054" lastFinishedPulling="2025-11-24 12:12:02.857505526 +0000 UTC m=+769.471833476" observedRunningTime="2025-11-24 12:12:03.542395285 +0000 UTC m=+770.156723235" watchObservedRunningTime="2025-11-24 12:12:03.543667392 +0000 UTC m=+770.157995342" Nov 24 12:12:03 crc kubenswrapper[4930]: I1124 12:12:03.587861 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tvblz" Nov 24 12:12:04 crc kubenswrapper[4930]: I1124 12:12:04.017594 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tvblz"] Nov 24 12:12:04 crc kubenswrapper[4930]: W1124 12:12:04.024261 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod770736f7_d373_4c1c_8980_5f71763d1b26.slice/crio-c4c777f422db51fcffac4716d8ffc1aa4fbe4d9fbc519c731d2ced27779b8073 WatchSource:0}: Error finding container c4c777f422db51fcffac4716d8ffc1aa4fbe4d9fbc519c731d2ced27779b8073: Status 404 returned error can't find the container with id c4c777f422db51fcffac4716d8ffc1aa4fbe4d9fbc519c731d2ced27779b8073 Nov 24 12:12:04 crc kubenswrapper[4930]: I1124 12:12:04.511457 4930 generic.go:334] "Generic (PLEG): container finished" podID="770736f7-d373-4c1c-8980-5f71763d1b26" containerID="8ebdcc28d8988d0b7ed5d471eb8893ab06fc1a7d23225507bc090268560e4363" exitCode=0 Nov 24 12:12:04 crc kubenswrapper[4930]: I1124 12:12:04.511566 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvblz" event={"ID":"770736f7-d373-4c1c-8980-5f71763d1b26","Type":"ContainerDied","Data":"8ebdcc28d8988d0b7ed5d471eb8893ab06fc1a7d23225507bc090268560e4363"} Nov 24 12:12:04 crc kubenswrapper[4930]: I1124 12:12:04.512731 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvblz" event={"ID":"770736f7-d373-4c1c-8980-5f71763d1b26","Type":"ContainerStarted","Data":"c4c777f422db51fcffac4716d8ffc1aa4fbe4d9fbc519c731d2ced27779b8073"} Nov 24 12:12:05 crc kubenswrapper[4930]: I1124 12:12:05.522345 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvblz" 
event={"ID":"770736f7-d373-4c1c-8980-5f71763d1b26","Type":"ContainerStarted","Data":"8f888003906efa451c41cbbca53bb46710a1f3bd71482d42d66407d07e08fee1"} Nov 24 12:12:06 crc kubenswrapper[4930]: I1124 12:12:06.529990 4930 generic.go:334] "Generic (PLEG): container finished" podID="770736f7-d373-4c1c-8980-5f71763d1b26" containerID="8f888003906efa451c41cbbca53bb46710a1f3bd71482d42d66407d07e08fee1" exitCode=0 Nov 24 12:12:06 crc kubenswrapper[4930]: I1124 12:12:06.530038 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvblz" event={"ID":"770736f7-d373-4c1c-8980-5f71763d1b26","Type":"ContainerDied","Data":"8f888003906efa451c41cbbca53bb46710a1f3bd71482d42d66407d07e08fee1"} Nov 24 12:12:06 crc kubenswrapper[4930]: I1124 12:12:06.904855 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2slvn" Nov 24 12:12:06 crc kubenswrapper[4930]: I1124 12:12:06.905113 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2slvn" Nov 24 12:12:06 crc kubenswrapper[4930]: I1124 12:12:06.950373 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2slvn" Nov 24 12:12:07 crc kubenswrapper[4930]: I1124 12:12:07.216969 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xpbvr" Nov 24 12:12:07 crc kubenswrapper[4930]: I1124 12:12:07.261238 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xpbvr" Nov 24 12:12:07 crc kubenswrapper[4930]: I1124 12:12:07.545546 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvblz" event={"ID":"770736f7-d373-4c1c-8980-5f71763d1b26","Type":"ContainerStarted","Data":"c279ce3b3a38740a579ced2deef5291944339294e0ccbde975e96d5385cbad57"} Nov 24 12:12:07 crc 
kubenswrapper[4930]: I1124 12:12:07.601180 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tvblz" podStartSLOduration=2.120732597 podStartE2EDuration="4.601163057s" podCreationTimestamp="2025-11-24 12:12:03 +0000 UTC" firstStartedPulling="2025-11-24 12:12:04.51268428 +0000 UTC m=+771.127012240" lastFinishedPulling="2025-11-24 12:12:06.99311475 +0000 UTC m=+773.607442700" observedRunningTime="2025-11-24 12:12:07.598020666 +0000 UTC m=+774.212348616" watchObservedRunningTime="2025-11-24 12:12:07.601163057 +0000 UTC m=+774.215491007" Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.205947 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-tdgdd" Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.223233 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xpbvr" Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.597823 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jdqv5"] Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.599256 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.617363 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdqv5"] Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.689702 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1108bcfa-51a5-4a39-87c8-e980db1779d9-utilities\") pod \"redhat-marketplace-jdqv5\" (UID: \"1108bcfa-51a5-4a39-87c8-e980db1779d9\") " pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.689915 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1108bcfa-51a5-4a39-87c8-e980db1779d9-catalog-content\") pod \"redhat-marketplace-jdqv5\" (UID: \"1108bcfa-51a5-4a39-87c8-e980db1779d9\") " pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.690022 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw2vt\" (UniqueName: \"kubernetes.io/projected/1108bcfa-51a5-4a39-87c8-e980db1779d9-kube-api-access-kw2vt\") pod \"redhat-marketplace-jdqv5\" (UID: \"1108bcfa-51a5-4a39-87c8-e980db1779d9\") " pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.791787 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1108bcfa-51a5-4a39-87c8-e980db1779d9-utilities\") pod \"redhat-marketplace-jdqv5\" (UID: \"1108bcfa-51a5-4a39-87c8-e980db1779d9\") " pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.791869 4930 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1108bcfa-51a5-4a39-87c8-e980db1779d9-catalog-content\") pod \"redhat-marketplace-jdqv5\" (UID: \"1108bcfa-51a5-4a39-87c8-e980db1779d9\") " pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.791896 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw2vt\" (UniqueName: \"kubernetes.io/projected/1108bcfa-51a5-4a39-87c8-e980db1779d9-kube-api-access-kw2vt\") pod \"redhat-marketplace-jdqv5\" (UID: \"1108bcfa-51a5-4a39-87c8-e980db1779d9\") " pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.792323 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1108bcfa-51a5-4a39-87c8-e980db1779d9-utilities\") pod \"redhat-marketplace-jdqv5\" (UID: \"1108bcfa-51a5-4a39-87c8-e980db1779d9\") " pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.792673 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1108bcfa-51a5-4a39-87c8-e980db1779d9-catalog-content\") pod \"redhat-marketplace-jdqv5\" (UID: \"1108bcfa-51a5-4a39-87c8-e980db1779d9\") " pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:12 crc kubenswrapper[4930]: I1124 12:12:12.922518 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw2vt\" (UniqueName: \"kubernetes.io/projected/1108bcfa-51a5-4a39-87c8-e980db1779d9-kube-api-access-kw2vt\") pod \"redhat-marketplace-jdqv5\" (UID: \"1108bcfa-51a5-4a39-87c8-e980db1779d9\") " pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:13 crc kubenswrapper[4930]: I1124 12:12:13.217588 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:13 crc kubenswrapper[4930]: I1124 12:12:13.588108 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tvblz" Nov 24 12:12:13 crc kubenswrapper[4930]: I1124 12:12:13.588158 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tvblz" Nov 24 12:12:13 crc kubenswrapper[4930]: I1124 12:12:13.666364 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdqv5"] Nov 24 12:12:13 crc kubenswrapper[4930]: I1124 12:12:13.674571 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tvblz" Nov 24 12:12:13 crc kubenswrapper[4930]: W1124 12:12:13.705691 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1108bcfa_51a5_4a39_87c8_e980db1779d9.slice/crio-480a3f26d9dee91211caa333623662f03f72361cd5ee899188568ce2b02577eb WatchSource:0}: Error finding container 480a3f26d9dee91211caa333623662f03f72361cd5ee899188568ce2b02577eb: Status 404 returned error can't find the container with id 480a3f26d9dee91211caa333623662f03f72361cd5ee899188568ce2b02577eb Nov 24 12:12:13 crc kubenswrapper[4930]: I1124 12:12:13.818843 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-t7cvk" Nov 24 12:12:14 crc kubenswrapper[4930]: I1124 12:12:14.587447 4930 generic.go:334] "Generic (PLEG): container finished" podID="1108bcfa-51a5-4a39-87c8-e980db1779d9" containerID="c8c825586826eb5c489335dfa790cf857b29eacf226c9052cfaf7828172b11db" exitCode=0 Nov 24 12:12:14 crc kubenswrapper[4930]: I1124 12:12:14.587502 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdqv5" 
event={"ID":"1108bcfa-51a5-4a39-87c8-e980db1779d9","Type":"ContainerDied","Data":"c8c825586826eb5c489335dfa790cf857b29eacf226c9052cfaf7828172b11db"} Nov 24 12:12:14 crc kubenswrapper[4930]: I1124 12:12:14.587574 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdqv5" event={"ID":"1108bcfa-51a5-4a39-87c8-e980db1779d9","Type":"ContainerStarted","Data":"480a3f26d9dee91211caa333623662f03f72361cd5ee899188568ce2b02577eb"} Nov 24 12:12:14 crc kubenswrapper[4930]: I1124 12:12:14.634760 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tvblz" Nov 24 12:12:15 crc kubenswrapper[4930]: I1124 12:12:15.595830 4930 generic.go:334] "Generic (PLEG): container finished" podID="1108bcfa-51a5-4a39-87c8-e980db1779d9" containerID="03f3c7b5ff7a04dc770f07dfd42259769be922a4c1ece6417f9e546171a5e3d4" exitCode=0 Nov 24 12:12:15 crc kubenswrapper[4930]: I1124 12:12:15.595931 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdqv5" event={"ID":"1108bcfa-51a5-4a39-87c8-e980db1779d9","Type":"ContainerDied","Data":"03f3c7b5ff7a04dc770f07dfd42259769be922a4c1ece6417f9e546171a5e3d4"} Nov 24 12:12:15 crc kubenswrapper[4930]: I1124 12:12:15.974848 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tvblz"] Nov 24 12:12:16 crc kubenswrapper[4930]: I1124 12:12:16.603270 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdqv5" event={"ID":"1108bcfa-51a5-4a39-87c8-e980db1779d9","Type":"ContainerStarted","Data":"733e5584aaecb15ff26bd81d548a37a999305c47abcc1ae1fc83e5ae41c15914"} Nov 24 12:12:16 crc kubenswrapper[4930]: I1124 12:12:16.604334 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tvblz" podUID="770736f7-d373-4c1c-8980-5f71763d1b26" containerName="registry-server" 
containerID="cri-o://c279ce3b3a38740a579ced2deef5291944339294e0ccbde975e96d5385cbad57" gracePeriod=2 Nov 24 12:12:16 crc kubenswrapper[4930]: I1124 12:12:16.957751 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2slvn" Nov 24 12:12:16 crc kubenswrapper[4930]: I1124 12:12:16.977154 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jdqv5" podStartSLOduration=3.538964384 podStartE2EDuration="4.977129999s" podCreationTimestamp="2025-11-24 12:12:12 +0000 UTC" firstStartedPulling="2025-11-24 12:12:14.590251783 +0000 UTC m=+781.204579733" lastFinishedPulling="2025-11-24 12:12:16.028417398 +0000 UTC m=+782.642745348" observedRunningTime="2025-11-24 12:12:16.633611489 +0000 UTC m=+783.247939449" watchObservedRunningTime="2025-11-24 12:12:16.977129999 +0000 UTC m=+783.591457949" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.035688 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tvblz" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.156654 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/770736f7-d373-4c1c-8980-5f71763d1b26-utilities\") pod \"770736f7-d373-4c1c-8980-5f71763d1b26\" (UID: \"770736f7-d373-4c1c-8980-5f71763d1b26\") " Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.156793 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjlsp\" (UniqueName: \"kubernetes.io/projected/770736f7-d373-4c1c-8980-5f71763d1b26-kube-api-access-rjlsp\") pod \"770736f7-d373-4c1c-8980-5f71763d1b26\" (UID: \"770736f7-d373-4c1c-8980-5f71763d1b26\") " Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.156835 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/770736f7-d373-4c1c-8980-5f71763d1b26-catalog-content\") pod \"770736f7-d373-4c1c-8980-5f71763d1b26\" (UID: \"770736f7-d373-4c1c-8980-5f71763d1b26\") " Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.157725 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/770736f7-d373-4c1c-8980-5f71763d1b26-utilities" (OuterVolumeSpecName: "utilities") pod "770736f7-d373-4c1c-8980-5f71763d1b26" (UID: "770736f7-d373-4c1c-8980-5f71763d1b26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.165760 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/770736f7-d373-4c1c-8980-5f71763d1b26-kube-api-access-rjlsp" (OuterVolumeSpecName: "kube-api-access-rjlsp") pod "770736f7-d373-4c1c-8980-5f71763d1b26" (UID: "770736f7-d373-4c1c-8980-5f71763d1b26"). InnerVolumeSpecName "kube-api-access-rjlsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.249876 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/770736f7-d373-4c1c-8980-5f71763d1b26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "770736f7-d373-4c1c-8980-5f71763d1b26" (UID: "770736f7-d373-4c1c-8980-5f71763d1b26"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.258284 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/770736f7-d373-4c1c-8980-5f71763d1b26-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.258310 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjlsp\" (UniqueName: \"kubernetes.io/projected/770736f7-d373-4c1c-8980-5f71763d1b26-kube-api-access-rjlsp\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.258322 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/770736f7-d373-4c1c-8980-5f71763d1b26-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.617929 4930 generic.go:334] "Generic (PLEG): container finished" podID="770736f7-d373-4c1c-8980-5f71763d1b26" containerID="c279ce3b3a38740a579ced2deef5291944339294e0ccbde975e96d5385cbad57" exitCode=0 Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.617998 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tvblz" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.618024 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvblz" event={"ID":"770736f7-d373-4c1c-8980-5f71763d1b26","Type":"ContainerDied","Data":"c279ce3b3a38740a579ced2deef5291944339294e0ccbde975e96d5385cbad57"} Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.618072 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvblz" event={"ID":"770736f7-d373-4c1c-8980-5f71763d1b26","Type":"ContainerDied","Data":"c4c777f422db51fcffac4716d8ffc1aa4fbe4d9fbc519c731d2ced27779b8073"} Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.618090 4930 scope.go:117] "RemoveContainer" containerID="c279ce3b3a38740a579ced2deef5291944339294e0ccbde975e96d5385cbad57" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.632450 4930 scope.go:117] "RemoveContainer" containerID="8f888003906efa451c41cbbca53bb46710a1f3bd71482d42d66407d07e08fee1" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.643222 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tvblz"] Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.655487 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tvblz"] Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.674144 4930 scope.go:117] "RemoveContainer" containerID="8ebdcc28d8988d0b7ed5d471eb8893ab06fc1a7d23225507bc090268560e4363" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.690160 4930 scope.go:117] "RemoveContainer" containerID="c279ce3b3a38740a579ced2deef5291944339294e0ccbde975e96d5385cbad57" Nov 24 12:12:17 crc kubenswrapper[4930]: E1124 12:12:17.690663 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c279ce3b3a38740a579ced2deef5291944339294e0ccbde975e96d5385cbad57\": container with ID starting with c279ce3b3a38740a579ced2deef5291944339294e0ccbde975e96d5385cbad57 not found: ID does not exist" containerID="c279ce3b3a38740a579ced2deef5291944339294e0ccbde975e96d5385cbad57" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.690741 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c279ce3b3a38740a579ced2deef5291944339294e0ccbde975e96d5385cbad57"} err="failed to get container status \"c279ce3b3a38740a579ced2deef5291944339294e0ccbde975e96d5385cbad57\": rpc error: code = NotFound desc = could not find container \"c279ce3b3a38740a579ced2deef5291944339294e0ccbde975e96d5385cbad57\": container with ID starting with c279ce3b3a38740a579ced2deef5291944339294e0ccbde975e96d5385cbad57 not found: ID does not exist" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.690785 4930 scope.go:117] "RemoveContainer" containerID="8f888003906efa451c41cbbca53bb46710a1f3bd71482d42d66407d07e08fee1" Nov 24 12:12:17 crc kubenswrapper[4930]: E1124 12:12:17.691271 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f888003906efa451c41cbbca53bb46710a1f3bd71482d42d66407d07e08fee1\": container with ID starting with 8f888003906efa451c41cbbca53bb46710a1f3bd71482d42d66407d07e08fee1 not found: ID does not exist" containerID="8f888003906efa451c41cbbca53bb46710a1f3bd71482d42d66407d07e08fee1" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.691312 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f888003906efa451c41cbbca53bb46710a1f3bd71482d42d66407d07e08fee1"} err="failed to get container status \"8f888003906efa451c41cbbca53bb46710a1f3bd71482d42d66407d07e08fee1\": rpc error: code = NotFound desc = could not find container \"8f888003906efa451c41cbbca53bb46710a1f3bd71482d42d66407d07e08fee1\": container with ID 
starting with 8f888003906efa451c41cbbca53bb46710a1f3bd71482d42d66407d07e08fee1 not found: ID does not exist" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.691343 4930 scope.go:117] "RemoveContainer" containerID="8ebdcc28d8988d0b7ed5d471eb8893ab06fc1a7d23225507bc090268560e4363" Nov 24 12:12:17 crc kubenswrapper[4930]: E1124 12:12:17.691685 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ebdcc28d8988d0b7ed5d471eb8893ab06fc1a7d23225507bc090268560e4363\": container with ID starting with 8ebdcc28d8988d0b7ed5d471eb8893ab06fc1a7d23225507bc090268560e4363 not found: ID does not exist" containerID="8ebdcc28d8988d0b7ed5d471eb8893ab06fc1a7d23225507bc090268560e4363" Nov 24 12:12:17 crc kubenswrapper[4930]: I1124 12:12:17.691739 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ebdcc28d8988d0b7ed5d471eb8893ab06fc1a7d23225507bc090268560e4363"} err="failed to get container status \"8ebdcc28d8988d0b7ed5d471eb8893ab06fc1a7d23225507bc090268560e4363\": rpc error: code = NotFound desc = could not find container \"8ebdcc28d8988d0b7ed5d471eb8893ab06fc1a7d23225507bc090268560e4363\": container with ID starting with 8ebdcc28d8988d0b7ed5d471eb8893ab06fc1a7d23225507bc090268560e4363 not found: ID does not exist" Nov 24 12:12:18 crc kubenswrapper[4930]: I1124 12:12:18.092487 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="770736f7-d373-4c1c-8980-5f71763d1b26" path="/var/lib/kubelet/pods/770736f7-d373-4c1c-8980-5f71763d1b26/volumes" Nov 24 12:12:22 crc kubenswrapper[4930]: I1124 12:12:22.978873 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5mhrz"] Nov 24 12:12:22 crc kubenswrapper[4930]: E1124 12:12:22.979412 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770736f7-d373-4c1c-8980-5f71763d1b26" containerName="extract-utilities" Nov 24 12:12:22 crc 
kubenswrapper[4930]: I1124 12:12:22.979424 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="770736f7-d373-4c1c-8980-5f71763d1b26" containerName="extract-utilities" Nov 24 12:12:22 crc kubenswrapper[4930]: E1124 12:12:22.979433 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770736f7-d373-4c1c-8980-5f71763d1b26" containerName="extract-content" Nov 24 12:12:22 crc kubenswrapper[4930]: I1124 12:12:22.979439 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="770736f7-d373-4c1c-8980-5f71763d1b26" containerName="extract-content" Nov 24 12:12:22 crc kubenswrapper[4930]: E1124 12:12:22.979453 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770736f7-d373-4c1c-8980-5f71763d1b26" containerName="registry-server" Nov 24 12:12:22 crc kubenswrapper[4930]: I1124 12:12:22.979458 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="770736f7-d373-4c1c-8980-5f71763d1b26" containerName="registry-server" Nov 24 12:12:22 crc kubenswrapper[4930]: I1124 12:12:22.979577 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="770736f7-d373-4c1c-8980-5f71763d1b26" containerName="registry-server" Nov 24 12:12:22 crc kubenswrapper[4930]: I1124 12:12:22.980016 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5mhrz" Nov 24 12:12:22 crc kubenswrapper[4930]: I1124 12:12:22.981740 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-58cns" Nov 24 12:12:22 crc kubenswrapper[4930]: I1124 12:12:22.985486 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 24 12:12:22 crc kubenswrapper[4930]: I1124 12:12:22.986577 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 24 12:12:22 crc kubenswrapper[4930]: I1124 12:12:22.991691 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5mhrz"] Nov 24 12:12:23 crc kubenswrapper[4930]: I1124 12:12:23.039825 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqf9l\" (UniqueName: \"kubernetes.io/projected/df43ee8c-48c3-4014-a134-a3fddf9e8194-kube-api-access-tqf9l\") pod \"openstack-operator-index-5mhrz\" (UID: \"df43ee8c-48c3-4014-a134-a3fddf9e8194\") " pod="openstack-operators/openstack-operator-index-5mhrz" Nov 24 12:12:23 crc kubenswrapper[4930]: I1124 12:12:23.140931 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqf9l\" (UniqueName: \"kubernetes.io/projected/df43ee8c-48c3-4014-a134-a3fddf9e8194-kube-api-access-tqf9l\") pod \"openstack-operator-index-5mhrz\" (UID: \"df43ee8c-48c3-4014-a134-a3fddf9e8194\") " pod="openstack-operators/openstack-operator-index-5mhrz" Nov 24 12:12:23 crc kubenswrapper[4930]: I1124 12:12:23.159236 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqf9l\" (UniqueName: \"kubernetes.io/projected/df43ee8c-48c3-4014-a134-a3fddf9e8194-kube-api-access-tqf9l\") pod \"openstack-operator-index-5mhrz\" (UID: 
\"df43ee8c-48c3-4014-a134-a3fddf9e8194\") " pod="openstack-operators/openstack-operator-index-5mhrz" Nov 24 12:12:23 crc kubenswrapper[4930]: I1124 12:12:23.218217 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:23 crc kubenswrapper[4930]: I1124 12:12:23.218265 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:23 crc kubenswrapper[4930]: I1124 12:12:23.264071 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:23 crc kubenswrapper[4930]: I1124 12:12:23.293798 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5mhrz" Nov 24 12:12:23 crc kubenswrapper[4930]: I1124 12:12:23.690360 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5mhrz"] Nov 24 12:12:23 crc kubenswrapper[4930]: W1124 12:12:23.703981 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf43ee8c_48c3_4014_a134_a3fddf9e8194.slice/crio-715dfcb93304bdf6f745d6b46bfa5110655b69c115dd16dc5445f0eb482b9fb9 WatchSource:0}: Error finding container 715dfcb93304bdf6f745d6b46bfa5110655b69c115dd16dc5445f0eb482b9fb9: Status 404 returned error can't find the container with id 715dfcb93304bdf6f745d6b46bfa5110655b69c115dd16dc5445f0eb482b9fb9 Nov 24 12:12:23 crc kubenswrapper[4930]: I1124 12:12:23.712882 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:23 crc kubenswrapper[4930]: I1124 12:12:23.974017 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2slvn"] Nov 24 12:12:23 crc kubenswrapper[4930]: I1124 
12:12:23.974266 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2slvn" podUID="3aef4010-162a-40dc-9841-4d0e64d1bae2" containerName="registry-server" containerID="cri-o://ce7e8387dab203081470f6a624d0b6592ac3cfe541d698c89a099c1c6808caf5" gracePeriod=2 Nov 24 12:12:24 crc kubenswrapper[4930]: I1124 12:12:24.663037 4930 generic.go:334] "Generic (PLEG): container finished" podID="3aef4010-162a-40dc-9841-4d0e64d1bae2" containerID="ce7e8387dab203081470f6a624d0b6592ac3cfe541d698c89a099c1c6808caf5" exitCode=0 Nov 24 12:12:24 crc kubenswrapper[4930]: I1124 12:12:24.663143 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2slvn" event={"ID":"3aef4010-162a-40dc-9841-4d0e64d1bae2","Type":"ContainerDied","Data":"ce7e8387dab203081470f6a624d0b6592ac3cfe541d698c89a099c1c6808caf5"} Nov 24 12:12:24 crc kubenswrapper[4930]: I1124 12:12:24.665320 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5mhrz" event={"ID":"df43ee8c-48c3-4014-a134-a3fddf9e8194","Type":"ContainerStarted","Data":"715dfcb93304bdf6f745d6b46bfa5110655b69c115dd16dc5445f0eb482b9fb9"} Nov 24 12:12:24 crc kubenswrapper[4930]: I1124 12:12:24.879802 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2slvn" Nov 24 12:12:24 crc kubenswrapper[4930]: I1124 12:12:24.970256 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45v6d\" (UniqueName: \"kubernetes.io/projected/3aef4010-162a-40dc-9841-4d0e64d1bae2-kube-api-access-45v6d\") pod \"3aef4010-162a-40dc-9841-4d0e64d1bae2\" (UID: \"3aef4010-162a-40dc-9841-4d0e64d1bae2\") " Nov 24 12:12:24 crc kubenswrapper[4930]: I1124 12:12:24.970307 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aef4010-162a-40dc-9841-4d0e64d1bae2-catalog-content\") pod \"3aef4010-162a-40dc-9841-4d0e64d1bae2\" (UID: \"3aef4010-162a-40dc-9841-4d0e64d1bae2\") " Nov 24 12:12:24 crc kubenswrapper[4930]: I1124 12:12:24.970351 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aef4010-162a-40dc-9841-4d0e64d1bae2-utilities\") pod \"3aef4010-162a-40dc-9841-4d0e64d1bae2\" (UID: \"3aef4010-162a-40dc-9841-4d0e64d1bae2\") " Nov 24 12:12:24 crc kubenswrapper[4930]: I1124 12:12:24.971373 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aef4010-162a-40dc-9841-4d0e64d1bae2-utilities" (OuterVolumeSpecName: "utilities") pod "3aef4010-162a-40dc-9841-4d0e64d1bae2" (UID: "3aef4010-162a-40dc-9841-4d0e64d1bae2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:12:24 crc kubenswrapper[4930]: I1124 12:12:24.982731 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aef4010-162a-40dc-9841-4d0e64d1bae2-kube-api-access-45v6d" (OuterVolumeSpecName: "kube-api-access-45v6d") pod "3aef4010-162a-40dc-9841-4d0e64d1bae2" (UID: "3aef4010-162a-40dc-9841-4d0e64d1bae2"). InnerVolumeSpecName "kube-api-access-45v6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:25 crc kubenswrapper[4930]: I1124 12:12:25.039965 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aef4010-162a-40dc-9841-4d0e64d1bae2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3aef4010-162a-40dc-9841-4d0e64d1bae2" (UID: "3aef4010-162a-40dc-9841-4d0e64d1bae2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:12:25 crc kubenswrapper[4930]: I1124 12:12:25.072061 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45v6d\" (UniqueName: \"kubernetes.io/projected/3aef4010-162a-40dc-9841-4d0e64d1bae2-kube-api-access-45v6d\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:25 crc kubenswrapper[4930]: I1124 12:12:25.072129 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aef4010-162a-40dc-9841-4d0e64d1bae2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:25 crc kubenswrapper[4930]: I1124 12:12:25.072140 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aef4010-162a-40dc-9841-4d0e64d1bae2-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:25 crc kubenswrapper[4930]: I1124 12:12:25.674263 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5mhrz" event={"ID":"df43ee8c-48c3-4014-a134-a3fddf9e8194","Type":"ContainerStarted","Data":"45729208d78306920aa9cb33fa8f2b7b2dbdc6c35a53028b071f06445d4ece5a"} Nov 24 12:12:25 crc kubenswrapper[4930]: I1124 12:12:25.676809 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2slvn" event={"ID":"3aef4010-162a-40dc-9841-4d0e64d1bae2","Type":"ContainerDied","Data":"29468f95b7d466c468780303a2eb64f28359ff3d524b44f4d8fad81dc91614ae"} Nov 24 12:12:25 crc kubenswrapper[4930]: 
I1124 12:12:25.676844 4930 scope.go:117] "RemoveContainer" containerID="ce7e8387dab203081470f6a624d0b6592ac3cfe541d698c89a099c1c6808caf5" Nov 24 12:12:25 crc kubenswrapper[4930]: I1124 12:12:25.676907 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2slvn" Nov 24 12:12:25 crc kubenswrapper[4930]: I1124 12:12:25.696240 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5mhrz" podStartSLOduration=2.842761034 podStartE2EDuration="3.69621401s" podCreationTimestamp="2025-11-24 12:12:22 +0000 UTC" firstStartedPulling="2025-11-24 12:12:23.706767317 +0000 UTC m=+790.321095277" lastFinishedPulling="2025-11-24 12:12:24.560220303 +0000 UTC m=+791.174548253" observedRunningTime="2025-11-24 12:12:25.68787982 +0000 UTC m=+792.302207860" watchObservedRunningTime="2025-11-24 12:12:25.69621401 +0000 UTC m=+792.310541960" Nov 24 12:12:25 crc kubenswrapper[4930]: I1124 12:12:25.697313 4930 scope.go:117] "RemoveContainer" containerID="c4f04663b292ae8a3dceb1c86de5e754e6c5355c8247eab2bafdb23bc5247b1d" Nov 24 12:12:25 crc kubenswrapper[4930]: I1124 12:12:25.718999 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2slvn"] Nov 24 12:12:25 crc kubenswrapper[4930]: I1124 12:12:25.728520 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2slvn"] Nov 24 12:12:25 crc kubenswrapper[4930]: I1124 12:12:25.732119 4930 scope.go:117] "RemoveContainer" containerID="206536122fe30125ae8bfb563b241169a815613e9ea91ee980f11e70450397a1" Nov 24 12:12:26 crc kubenswrapper[4930]: I1124 12:12:26.092207 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aef4010-162a-40dc-9841-4d0e64d1bae2" path="/var/lib/kubelet/pods/3aef4010-162a-40dc-9841-4d0e64d1bae2/volumes" Nov 24 12:12:27 crc kubenswrapper[4930]: I1124 12:12:27.372256 4930 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdqv5"] Nov 24 12:12:27 crc kubenswrapper[4930]: I1124 12:12:27.372551 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jdqv5" podUID="1108bcfa-51a5-4a39-87c8-e980db1779d9" containerName="registry-server" containerID="cri-o://733e5584aaecb15ff26bd81d548a37a999305c47abcc1ae1fc83e5ae41c15914" gracePeriod=2 Nov 24 12:12:27 crc kubenswrapper[4930]: I1124 12:12:27.693861 4930 generic.go:334] "Generic (PLEG): container finished" podID="1108bcfa-51a5-4a39-87c8-e980db1779d9" containerID="733e5584aaecb15ff26bd81d548a37a999305c47abcc1ae1fc83e5ae41c15914" exitCode=0 Nov 24 12:12:27 crc kubenswrapper[4930]: I1124 12:12:27.693897 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdqv5" event={"ID":"1108bcfa-51a5-4a39-87c8-e980db1779d9","Type":"ContainerDied","Data":"733e5584aaecb15ff26bd81d548a37a999305c47abcc1ae1fc83e5ae41c15914"} Nov 24 12:12:27 crc kubenswrapper[4930]: I1124 12:12:27.765378 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:27 crc kubenswrapper[4930]: I1124 12:12:27.910639 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw2vt\" (UniqueName: \"kubernetes.io/projected/1108bcfa-51a5-4a39-87c8-e980db1779d9-kube-api-access-kw2vt\") pod \"1108bcfa-51a5-4a39-87c8-e980db1779d9\" (UID: \"1108bcfa-51a5-4a39-87c8-e980db1779d9\") " Nov 24 12:12:27 crc kubenswrapper[4930]: I1124 12:12:27.911213 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1108bcfa-51a5-4a39-87c8-e980db1779d9-utilities\") pod \"1108bcfa-51a5-4a39-87c8-e980db1779d9\" (UID: \"1108bcfa-51a5-4a39-87c8-e980db1779d9\") " Nov 24 12:12:27 crc kubenswrapper[4930]: I1124 12:12:27.911500 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1108bcfa-51a5-4a39-87c8-e980db1779d9-catalog-content\") pod \"1108bcfa-51a5-4a39-87c8-e980db1779d9\" (UID: \"1108bcfa-51a5-4a39-87c8-e980db1779d9\") " Nov 24 12:12:27 crc kubenswrapper[4930]: I1124 12:12:27.912056 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1108bcfa-51a5-4a39-87c8-e980db1779d9-utilities" (OuterVolumeSpecName: "utilities") pod "1108bcfa-51a5-4a39-87c8-e980db1779d9" (UID: "1108bcfa-51a5-4a39-87c8-e980db1779d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:12:27 crc kubenswrapper[4930]: I1124 12:12:27.917006 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1108bcfa-51a5-4a39-87c8-e980db1779d9-kube-api-access-kw2vt" (OuterVolumeSpecName: "kube-api-access-kw2vt") pod "1108bcfa-51a5-4a39-87c8-e980db1779d9" (UID: "1108bcfa-51a5-4a39-87c8-e980db1779d9"). InnerVolumeSpecName "kube-api-access-kw2vt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:27 crc kubenswrapper[4930]: I1124 12:12:27.928096 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1108bcfa-51a5-4a39-87c8-e980db1779d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1108bcfa-51a5-4a39-87c8-e980db1779d9" (UID: "1108bcfa-51a5-4a39-87c8-e980db1779d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:12:28 crc kubenswrapper[4930]: I1124 12:12:28.013532 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1108bcfa-51a5-4a39-87c8-e980db1779d9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:28 crc kubenswrapper[4930]: I1124 12:12:28.013595 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw2vt\" (UniqueName: \"kubernetes.io/projected/1108bcfa-51a5-4a39-87c8-e980db1779d9-kube-api-access-kw2vt\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:28 crc kubenswrapper[4930]: I1124 12:12:28.013611 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1108bcfa-51a5-4a39-87c8-e980db1779d9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:28 crc kubenswrapper[4930]: I1124 12:12:28.709471 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdqv5" event={"ID":"1108bcfa-51a5-4a39-87c8-e980db1779d9","Type":"ContainerDied","Data":"480a3f26d9dee91211caa333623662f03f72361cd5ee899188568ce2b02577eb"} Nov 24 12:12:28 crc kubenswrapper[4930]: I1124 12:12:28.710558 4930 scope.go:117] "RemoveContainer" containerID="733e5584aaecb15ff26bd81d548a37a999305c47abcc1ae1fc83e5ae41c15914" Nov 24 12:12:28 crc kubenswrapper[4930]: I1124 12:12:28.709598 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdqv5" Nov 24 12:12:28 crc kubenswrapper[4930]: I1124 12:12:28.734933 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdqv5"] Nov 24 12:12:28 crc kubenswrapper[4930]: I1124 12:12:28.739085 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdqv5"] Nov 24 12:12:28 crc kubenswrapper[4930]: I1124 12:12:28.744364 4930 scope.go:117] "RemoveContainer" containerID="03f3c7b5ff7a04dc770f07dfd42259769be922a4c1ece6417f9e546171a5e3d4" Nov 24 12:12:28 crc kubenswrapper[4930]: I1124 12:12:28.761880 4930 scope.go:117] "RemoveContainer" containerID="c8c825586826eb5c489335dfa790cf857b29eacf226c9052cfaf7828172b11db" Nov 24 12:12:30 crc kubenswrapper[4930]: I1124 12:12:30.098317 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1108bcfa-51a5-4a39-87c8-e980db1779d9" path="/var/lib/kubelet/pods/1108bcfa-51a5-4a39-87c8-e980db1779d9/volumes" Nov 24 12:12:31 crc kubenswrapper[4930]: I1124 12:12:31.809131 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:12:31 crc kubenswrapper[4930]: I1124 12:12:31.809196 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:12:33 crc kubenswrapper[4930]: I1124 12:12:33.294514 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5mhrz" Nov 24 12:12:33 crc 
kubenswrapper[4930]: I1124 12:12:33.294611 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-5mhrz" Nov 24 12:12:33 crc kubenswrapper[4930]: I1124 12:12:33.323288 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5mhrz" Nov 24 12:12:33 crc kubenswrapper[4930]: I1124 12:12:33.764981 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5mhrz" Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.781699 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lv7bv"] Nov 24 12:12:34 crc kubenswrapper[4930]: E1124 12:12:34.783556 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aef4010-162a-40dc-9841-4d0e64d1bae2" containerName="registry-server" Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.783649 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aef4010-162a-40dc-9841-4d0e64d1bae2" containerName="registry-server" Nov 24 12:12:34 crc kubenswrapper[4930]: E1124 12:12:34.783739 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1108bcfa-51a5-4a39-87c8-e980db1779d9" containerName="extract-content" Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.783824 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="1108bcfa-51a5-4a39-87c8-e980db1779d9" containerName="extract-content" Nov 24 12:12:34 crc kubenswrapper[4930]: E1124 12:12:34.783902 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aef4010-162a-40dc-9841-4d0e64d1bae2" containerName="extract-utilities" Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.783977 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aef4010-162a-40dc-9841-4d0e64d1bae2" containerName="extract-utilities" Nov 24 12:12:34 crc kubenswrapper[4930]: E1124 12:12:34.784112 4930 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aef4010-162a-40dc-9841-4d0e64d1bae2" containerName="extract-content" Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.784172 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aef4010-162a-40dc-9841-4d0e64d1bae2" containerName="extract-content" Nov 24 12:12:34 crc kubenswrapper[4930]: E1124 12:12:34.784234 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1108bcfa-51a5-4a39-87c8-e980db1779d9" containerName="extract-utilities" Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.784293 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="1108bcfa-51a5-4a39-87c8-e980db1779d9" containerName="extract-utilities" Nov 24 12:12:34 crc kubenswrapper[4930]: E1124 12:12:34.784349 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1108bcfa-51a5-4a39-87c8-e980db1779d9" containerName="registry-server" Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.784400 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="1108bcfa-51a5-4a39-87c8-e980db1779d9" containerName="registry-server" Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.784597 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="1108bcfa-51a5-4a39-87c8-e980db1779d9" containerName="registry-server" Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.784698 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aef4010-162a-40dc-9841-4d0e64d1bae2" containerName="registry-server" Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.787069 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.798131 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lv7bv"] Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.907315 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zbvl\" (UniqueName: \"kubernetes.io/projected/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-kube-api-access-9zbvl\") pod \"certified-operators-lv7bv\" (UID: \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\") " pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.907365 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-catalog-content\") pod \"certified-operators-lv7bv\" (UID: \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\") " pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:34 crc kubenswrapper[4930]: I1124 12:12:34.907400 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-utilities\") pod \"certified-operators-lv7bv\" (UID: \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\") " pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:35 crc kubenswrapper[4930]: I1124 12:12:35.009179 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zbvl\" (UniqueName: \"kubernetes.io/projected/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-kube-api-access-9zbvl\") pod \"certified-operators-lv7bv\" (UID: \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\") " pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:35 crc kubenswrapper[4930]: I1124 12:12:35.009232 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-catalog-content\") pod \"certified-operators-lv7bv\" (UID: \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\") " pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:35 crc kubenswrapper[4930]: I1124 12:12:35.009262 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-utilities\") pod \"certified-operators-lv7bv\" (UID: \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\") " pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:35 crc kubenswrapper[4930]: I1124 12:12:35.009760 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-utilities\") pod \"certified-operators-lv7bv\" (UID: \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\") " pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:35 crc kubenswrapper[4930]: I1124 12:12:35.009840 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-catalog-content\") pod \"certified-operators-lv7bv\" (UID: \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\") " pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:35 crc kubenswrapper[4930]: I1124 12:12:35.030913 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zbvl\" (UniqueName: \"kubernetes.io/projected/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-kube-api-access-9zbvl\") pod \"certified-operators-lv7bv\" (UID: \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\") " pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:35 crc kubenswrapper[4930]: I1124 12:12:35.106996 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:35 crc kubenswrapper[4930]: I1124 12:12:35.532123 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lv7bv"] Nov 24 12:12:35 crc kubenswrapper[4930]: I1124 12:12:35.750597 4930 generic.go:334] "Generic (PLEG): container finished" podID="bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" containerID="808b0cbb8b93a4989823ecf6eafc62459f9beaf1ea25beb1a5dbb4b0356f4d1f" exitCode=0 Nov 24 12:12:35 crc kubenswrapper[4930]: I1124 12:12:35.750629 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lv7bv" event={"ID":"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19","Type":"ContainerDied","Data":"808b0cbb8b93a4989823ecf6eafc62459f9beaf1ea25beb1a5dbb4b0356f4d1f"} Nov 24 12:12:35 crc kubenswrapper[4930]: I1124 12:12:35.750672 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lv7bv" event={"ID":"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19","Type":"ContainerStarted","Data":"a71976c16e289c6aefce1434f210ed65205dc365abfa766e251bbbb1adb4f249"} Nov 24 12:12:36 crc kubenswrapper[4930]: I1124 12:12:36.758601 4930 generic.go:334] "Generic (PLEG): container finished" podID="bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" containerID="17569f1596e8c81d0db0609d595a5861275eb819cc7a647fc9c0a94c637e4c42" exitCode=0 Nov 24 12:12:36 crc kubenswrapper[4930]: I1124 12:12:36.758655 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lv7bv" event={"ID":"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19","Type":"ContainerDied","Data":"17569f1596e8c81d0db0609d595a5861275eb819cc7a647fc9c0a94c637e4c42"} Nov 24 12:12:37 crc kubenswrapper[4930]: I1124 12:12:37.767285 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lv7bv" 
event={"ID":"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19","Type":"ContainerStarted","Data":"e65d25fb5d88ab0db490398adec8aa3f3f8d21d59b56367848bb4e3ca4ec3d49"} Nov 24 12:12:37 crc kubenswrapper[4930]: I1124 12:12:37.797667 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lv7bv" podStartSLOduration=2.363801099 podStartE2EDuration="3.797651799s" podCreationTimestamp="2025-11-24 12:12:34 +0000 UTC" firstStartedPulling="2025-11-24 12:12:35.752255914 +0000 UTC m=+802.366583864" lastFinishedPulling="2025-11-24 12:12:37.186106614 +0000 UTC m=+803.800434564" observedRunningTime="2025-11-24 12:12:37.797316819 +0000 UTC m=+804.411644769" watchObservedRunningTime="2025-11-24 12:12:37.797651799 +0000 UTC m=+804.411979749" Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.020894 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg"] Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.022119 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.024193 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hszds" Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.032678 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg"] Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.155828 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7294a2f2-e7f6-489a-8520-a079269ea728-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg\" (UID: \"7294a2f2-e7f6-489a-8520-a079269ea728\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.155909 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr66t\" (UniqueName: \"kubernetes.io/projected/7294a2f2-e7f6-489a-8520-a079269ea728-kube-api-access-gr66t\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg\" (UID: \"7294a2f2-e7f6-489a-8520-a079269ea728\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.156019 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7294a2f2-e7f6-489a-8520-a079269ea728-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg\" (UID: \"7294a2f2-e7f6-489a-8520-a079269ea728\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 
12:12:38.257811 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7294a2f2-e7f6-489a-8520-a079269ea728-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg\" (UID: \"7294a2f2-e7f6-489a-8520-a079269ea728\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.257929 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7294a2f2-e7f6-489a-8520-a079269ea728-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg\" (UID: \"7294a2f2-e7f6-489a-8520-a079269ea728\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.257970 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr66t\" (UniqueName: \"kubernetes.io/projected/7294a2f2-e7f6-489a-8520-a079269ea728-kube-api-access-gr66t\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg\" (UID: \"7294a2f2-e7f6-489a-8520-a079269ea728\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.258395 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7294a2f2-e7f6-489a-8520-a079269ea728-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg\" (UID: \"7294a2f2-e7f6-489a-8520-a079269ea728\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.258671 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7294a2f2-e7f6-489a-8520-a079269ea728-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg\" (UID: \"7294a2f2-e7f6-489a-8520-a079269ea728\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.277045 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr66t\" (UniqueName: \"kubernetes.io/projected/7294a2f2-e7f6-489a-8520-a079269ea728-kube-api-access-gr66t\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg\" (UID: \"7294a2f2-e7f6-489a-8520-a079269ea728\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.341827 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" Nov 24 12:12:38 crc kubenswrapper[4930]: I1124 12:12:38.793465 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg"] Nov 24 12:12:38 crc kubenswrapper[4930]: W1124 12:12:38.802754 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7294a2f2_e7f6_489a_8520_a079269ea728.slice/crio-1a30b3e3f82a465d57ecddec0e985cd5b391793c75ee30f364d55087b1dec05d WatchSource:0}: Error finding container 1a30b3e3f82a465d57ecddec0e985cd5b391793c75ee30f364d55087b1dec05d: Status 404 returned error can't find the container with id 1a30b3e3f82a465d57ecddec0e985cd5b391793c75ee30f364d55087b1dec05d Nov 24 12:12:39 crc kubenswrapper[4930]: I1124 12:12:39.782265 4930 generic.go:334] "Generic (PLEG): container finished" podID="7294a2f2-e7f6-489a-8520-a079269ea728" containerID="e93b0bb42c99b7e2069266b067993b0d2b514f462bdfadb062c74edda8613259" exitCode=0 Nov 24 
12:12:39 crc kubenswrapper[4930]: I1124 12:12:39.782558 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" event={"ID":"7294a2f2-e7f6-489a-8520-a079269ea728","Type":"ContainerDied","Data":"e93b0bb42c99b7e2069266b067993b0d2b514f462bdfadb062c74edda8613259"} Nov 24 12:12:39 crc kubenswrapper[4930]: I1124 12:12:39.782589 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" event={"ID":"7294a2f2-e7f6-489a-8520-a079269ea728","Type":"ContainerStarted","Data":"1a30b3e3f82a465d57ecddec0e985cd5b391793c75ee30f364d55087b1dec05d"} Nov 24 12:12:40 crc kubenswrapper[4930]: I1124 12:12:40.788525 4930 generic.go:334] "Generic (PLEG): container finished" podID="7294a2f2-e7f6-489a-8520-a079269ea728" containerID="deab135e23d8f1fae3b85aede5d6f70bde3c1575c3bca140e2e4cc861780d966" exitCode=0 Nov 24 12:12:40 crc kubenswrapper[4930]: I1124 12:12:40.788729 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" event={"ID":"7294a2f2-e7f6-489a-8520-a079269ea728","Type":"ContainerDied","Data":"deab135e23d8f1fae3b85aede5d6f70bde3c1575c3bca140e2e4cc861780d966"} Nov 24 12:12:41 crc kubenswrapper[4930]: I1124 12:12:41.796321 4930 generic.go:334] "Generic (PLEG): container finished" podID="7294a2f2-e7f6-489a-8520-a079269ea728" containerID="6d9546f21014e664457ed54dc08d27eb651fc95d7082437465eeb2f22e79f23b" exitCode=0 Nov 24 12:12:41 crc kubenswrapper[4930]: I1124 12:12:41.796371 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" event={"ID":"7294a2f2-e7f6-489a-8520-a079269ea728","Type":"ContainerDied","Data":"6d9546f21014e664457ed54dc08d27eb651fc95d7082437465eeb2f22e79f23b"} Nov 24 12:12:43 crc kubenswrapper[4930]: I1124 12:12:43.089673 
4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" Nov 24 12:12:43 crc kubenswrapper[4930]: I1124 12:12:43.222995 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7294a2f2-e7f6-489a-8520-a079269ea728-bundle\") pod \"7294a2f2-e7f6-489a-8520-a079269ea728\" (UID: \"7294a2f2-e7f6-489a-8520-a079269ea728\") " Nov 24 12:12:43 crc kubenswrapper[4930]: I1124 12:12:43.223335 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr66t\" (UniqueName: \"kubernetes.io/projected/7294a2f2-e7f6-489a-8520-a079269ea728-kube-api-access-gr66t\") pod \"7294a2f2-e7f6-489a-8520-a079269ea728\" (UID: \"7294a2f2-e7f6-489a-8520-a079269ea728\") " Nov 24 12:12:43 crc kubenswrapper[4930]: I1124 12:12:43.223426 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7294a2f2-e7f6-489a-8520-a079269ea728-util\") pod \"7294a2f2-e7f6-489a-8520-a079269ea728\" (UID: \"7294a2f2-e7f6-489a-8520-a079269ea728\") " Nov 24 12:12:43 crc kubenswrapper[4930]: I1124 12:12:43.223925 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7294a2f2-e7f6-489a-8520-a079269ea728-bundle" (OuterVolumeSpecName: "bundle") pod "7294a2f2-e7f6-489a-8520-a079269ea728" (UID: "7294a2f2-e7f6-489a-8520-a079269ea728"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:12:43 crc kubenswrapper[4930]: I1124 12:12:43.228180 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7294a2f2-e7f6-489a-8520-a079269ea728-kube-api-access-gr66t" (OuterVolumeSpecName: "kube-api-access-gr66t") pod "7294a2f2-e7f6-489a-8520-a079269ea728" (UID: "7294a2f2-e7f6-489a-8520-a079269ea728"). 
InnerVolumeSpecName "kube-api-access-gr66t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:43 crc kubenswrapper[4930]: I1124 12:12:43.239351 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7294a2f2-e7f6-489a-8520-a079269ea728-util" (OuterVolumeSpecName: "util") pod "7294a2f2-e7f6-489a-8520-a079269ea728" (UID: "7294a2f2-e7f6-489a-8520-a079269ea728"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:12:43 crc kubenswrapper[4930]: I1124 12:12:43.324921 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr66t\" (UniqueName: \"kubernetes.io/projected/7294a2f2-e7f6-489a-8520-a079269ea728-kube-api-access-gr66t\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:43 crc kubenswrapper[4930]: I1124 12:12:43.324957 4930 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7294a2f2-e7f6-489a-8520-a079269ea728-util\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:43 crc kubenswrapper[4930]: I1124 12:12:43.324970 4930 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7294a2f2-e7f6-489a-8520-a079269ea728-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:43 crc kubenswrapper[4930]: I1124 12:12:43.810030 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" event={"ID":"7294a2f2-e7f6-489a-8520-a079269ea728","Type":"ContainerDied","Data":"1a30b3e3f82a465d57ecddec0e985cd5b391793c75ee30f364d55087b1dec05d"} Nov 24 12:12:43 crc kubenswrapper[4930]: I1124 12:12:43.810387 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a30b3e3f82a465d57ecddec0e985cd5b391793c75ee30f364d55087b1dec05d" Nov 24 12:12:43 crc kubenswrapper[4930]: I1124 12:12:43.810346 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg" Nov 24 12:12:45 crc kubenswrapper[4930]: I1124 12:12:45.107568 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:45 crc kubenswrapper[4930]: I1124 12:12:45.107615 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:45 crc kubenswrapper[4930]: I1124 12:12:45.146070 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:45 crc kubenswrapper[4930]: I1124 12:12:45.867080 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:47 crc kubenswrapper[4930]: I1124 12:12:47.574256 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lv7bv"] Nov 24 12:12:47 crc kubenswrapper[4930]: I1124 12:12:47.832179 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lv7bv" podUID="bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" containerName="registry-server" containerID="cri-o://e65d25fb5d88ab0db490398adec8aa3f3f8d21d59b56367848bb4e3ca4ec3d49" gracePeriod=2 Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.230794 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.294034 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zbvl\" (UniqueName: \"kubernetes.io/projected/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-kube-api-access-9zbvl\") pod \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\" (UID: \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\") " Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.294102 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-utilities\") pod \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\" (UID: \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\") " Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.294169 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-catalog-content\") pod \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\" (UID: \"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19\") " Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.299478 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-utilities" (OuterVolumeSpecName: "utilities") pod "bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" (UID: "bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.303953 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-kube-api-access-9zbvl" (OuterVolumeSpecName: "kube-api-access-9zbvl") pod "bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" (UID: "bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19"). InnerVolumeSpecName "kube-api-access-9zbvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.359243 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" (UID: "bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.395687 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.395747 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zbvl\" (UniqueName: \"kubernetes.io/projected/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-kube-api-access-9zbvl\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.395761 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.840472 4930 generic.go:334] "Generic (PLEG): container finished" podID="bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" containerID="e65d25fb5d88ab0db490398adec8aa3f3f8d21d59b56367848bb4e3ca4ec3d49" exitCode=0 Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.840522 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lv7bv" event={"ID":"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19","Type":"ContainerDied","Data":"e65d25fb5d88ab0db490398adec8aa3f3f8d21d59b56367848bb4e3ca4ec3d49"} Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.840572 4930 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-lv7bv" event={"ID":"bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19","Type":"ContainerDied","Data":"a71976c16e289c6aefce1434f210ed65205dc365abfa766e251bbbb1adb4f249"} Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.840591 4930 scope.go:117] "RemoveContainer" containerID="e65d25fb5d88ab0db490398adec8aa3f3f8d21d59b56367848bb4e3ca4ec3d49" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.840527 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lv7bv" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.869817 4930 scope.go:117] "RemoveContainer" containerID="17569f1596e8c81d0db0609d595a5861275eb819cc7a647fc9c0a94c637e4c42" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.870169 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lv7bv"] Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.875651 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lv7bv"] Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.900580 4930 scope.go:117] "RemoveContainer" containerID="808b0cbb8b93a4989823ecf6eafc62459f9beaf1ea25beb1a5dbb4b0356f4d1f" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.916283 4930 scope.go:117] "RemoveContainer" containerID="e65d25fb5d88ab0db490398adec8aa3f3f8d21d59b56367848bb4e3ca4ec3d49" Nov 24 12:12:48 crc kubenswrapper[4930]: E1124 12:12:48.917398 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e65d25fb5d88ab0db490398adec8aa3f3f8d21d59b56367848bb4e3ca4ec3d49\": container with ID starting with e65d25fb5d88ab0db490398adec8aa3f3f8d21d59b56367848bb4e3ca4ec3d49 not found: ID does not exist" containerID="e65d25fb5d88ab0db490398adec8aa3f3f8d21d59b56367848bb4e3ca4ec3d49" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 
12:12:48.917436 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e65d25fb5d88ab0db490398adec8aa3f3f8d21d59b56367848bb4e3ca4ec3d49"} err="failed to get container status \"e65d25fb5d88ab0db490398adec8aa3f3f8d21d59b56367848bb4e3ca4ec3d49\": rpc error: code = NotFound desc = could not find container \"e65d25fb5d88ab0db490398adec8aa3f3f8d21d59b56367848bb4e3ca4ec3d49\": container with ID starting with e65d25fb5d88ab0db490398adec8aa3f3f8d21d59b56367848bb4e3ca4ec3d49 not found: ID does not exist" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.917466 4930 scope.go:117] "RemoveContainer" containerID="17569f1596e8c81d0db0609d595a5861275eb819cc7a647fc9c0a94c637e4c42" Nov 24 12:12:48 crc kubenswrapper[4930]: E1124 12:12:48.917822 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17569f1596e8c81d0db0609d595a5861275eb819cc7a647fc9c0a94c637e4c42\": container with ID starting with 17569f1596e8c81d0db0609d595a5861275eb819cc7a647fc9c0a94c637e4c42 not found: ID does not exist" containerID="17569f1596e8c81d0db0609d595a5861275eb819cc7a647fc9c0a94c637e4c42" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.917849 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17569f1596e8c81d0db0609d595a5861275eb819cc7a647fc9c0a94c637e4c42"} err="failed to get container status \"17569f1596e8c81d0db0609d595a5861275eb819cc7a647fc9c0a94c637e4c42\": rpc error: code = NotFound desc = could not find container \"17569f1596e8c81d0db0609d595a5861275eb819cc7a647fc9c0a94c637e4c42\": container with ID starting with 17569f1596e8c81d0db0609d595a5861275eb819cc7a647fc9c0a94c637e4c42 not found: ID does not exist" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.917867 4930 scope.go:117] "RemoveContainer" containerID="808b0cbb8b93a4989823ecf6eafc62459f9beaf1ea25beb1a5dbb4b0356f4d1f" Nov 24 12:12:48 crc 
kubenswrapper[4930]: E1124 12:12:48.918136 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"808b0cbb8b93a4989823ecf6eafc62459f9beaf1ea25beb1a5dbb4b0356f4d1f\": container with ID starting with 808b0cbb8b93a4989823ecf6eafc62459f9beaf1ea25beb1a5dbb4b0356f4d1f not found: ID does not exist" containerID="808b0cbb8b93a4989823ecf6eafc62459f9beaf1ea25beb1a5dbb4b0356f4d1f" Nov 24 12:12:48 crc kubenswrapper[4930]: I1124 12:12:48.918164 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"808b0cbb8b93a4989823ecf6eafc62459f9beaf1ea25beb1a5dbb4b0356f4d1f"} err="failed to get container status \"808b0cbb8b93a4989823ecf6eafc62459f9beaf1ea25beb1a5dbb4b0356f4d1f\": rpc error: code = NotFound desc = could not find container \"808b0cbb8b93a4989823ecf6eafc62459f9beaf1ea25beb1a5dbb4b0356f4d1f\": container with ID starting with 808b0cbb8b93a4989823ecf6eafc62459f9beaf1ea25beb1a5dbb4b0356f4d1f not found: ID does not exist" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.092404 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" path="/var/lib/kubelet/pods/bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19/volumes" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.097959 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l"] Nov 24 12:12:50 crc kubenswrapper[4930]: E1124 12:12:50.098381 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7294a2f2-e7f6-489a-8520-a079269ea728" containerName="pull" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.098405 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7294a2f2-e7f6-489a-8520-a079269ea728" containerName="pull" Nov 24 12:12:50 crc kubenswrapper[4930]: E1124 12:12:50.098427 4930 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" containerName="extract-content" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.098440 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" containerName="extract-content" Nov 24 12:12:50 crc kubenswrapper[4930]: E1124 12:12:50.098461 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7294a2f2-e7f6-489a-8520-a079269ea728" containerName="util" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.098473 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7294a2f2-e7f6-489a-8520-a079269ea728" containerName="util" Nov 24 12:12:50 crc kubenswrapper[4930]: E1124 12:12:50.098488 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7294a2f2-e7f6-489a-8520-a079269ea728" containerName="extract" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.098503 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7294a2f2-e7f6-489a-8520-a079269ea728" containerName="extract" Nov 24 12:12:50 crc kubenswrapper[4930]: E1124 12:12:50.098523 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" containerName="extract-utilities" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.098558 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" containerName="extract-utilities" Nov 24 12:12:50 crc kubenswrapper[4930]: E1124 12:12:50.098581 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" containerName="registry-server" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.098591 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" containerName="registry-server" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.098770 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcd94ae6-0a7d-4bfc-8b24-b6c3f8161e19" 
containerName="registry-server" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.098793 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7294a2f2-e7f6-489a-8520-a079269ea728" containerName="extract" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.099873 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.105742 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-tzqwz" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.194526 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l"] Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.218202 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssvww\" (UniqueName: \"kubernetes.io/projected/a258ca7d-5a5d-477b-919c-e770ab7fa9cd-kube-api-access-ssvww\") pod \"openstack-operator-controller-operator-8486c7f98b-v5s6l\" (UID: \"a258ca7d-5a5d-477b-919c-e770ab7fa9cd\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.319203 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssvww\" (UniqueName: \"kubernetes.io/projected/a258ca7d-5a5d-477b-919c-e770ab7fa9cd-kube-api-access-ssvww\") pod \"openstack-operator-controller-operator-8486c7f98b-v5s6l\" (UID: \"a258ca7d-5a5d-477b-919c-e770ab7fa9cd\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.339444 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssvww\" (UniqueName: 
\"kubernetes.io/projected/a258ca7d-5a5d-477b-919c-e770ab7fa9cd-kube-api-access-ssvww\") pod \"openstack-operator-controller-operator-8486c7f98b-v5s6l\" (UID: \"a258ca7d-5a5d-477b-919c-e770ab7fa9cd\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.415822 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l" Nov 24 12:12:50 crc kubenswrapper[4930]: I1124 12:12:50.861093 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l"] Nov 24 12:12:51 crc kubenswrapper[4930]: I1124 12:12:51.861083 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l" event={"ID":"a258ca7d-5a5d-477b-919c-e770ab7fa9cd","Type":"ContainerStarted","Data":"ae81a31d3e341a069ac345d6fce90fba99934d5686c64e37c2c27d299bddbf38"} Nov 24 12:12:56 crc kubenswrapper[4930]: I1124 12:12:56.892269 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l" event={"ID":"a258ca7d-5a5d-477b-919c-e770ab7fa9cd","Type":"ContainerStarted","Data":"54f70bc465cc15d4b1a78ed49cc175991ce4d44dc43b6240df93b7bfadf073d2"} Nov 24 12:12:58 crc kubenswrapper[4930]: I1124 12:12:58.908917 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l" event={"ID":"a258ca7d-5a5d-477b-919c-e770ab7fa9cd","Type":"ContainerStarted","Data":"cf7150787f7e1e0c6a17046474e2d6cf1467a7e2c65933002956d6adfd17a547"} Nov 24 12:12:58 crc kubenswrapper[4930]: I1124 12:12:58.909277 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l" Nov 24 12:12:58 crc 
kubenswrapper[4930]: I1124 12:12:58.947197 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l" podStartSLOduration=1.331932938 podStartE2EDuration="8.947177868s" podCreationTimestamp="2025-11-24 12:12:50 +0000 UTC" firstStartedPulling="2025-11-24 12:12:50.871730646 +0000 UTC m=+817.486058586" lastFinishedPulling="2025-11-24 12:12:58.486975566 +0000 UTC m=+825.101303516" observedRunningTime="2025-11-24 12:12:58.941903356 +0000 UTC m=+825.556231356" watchObservedRunningTime="2025-11-24 12:12:58.947177868 +0000 UTC m=+825.561505838" Nov 24 12:13:01 crc kubenswrapper[4930]: I1124 12:13:01.808960 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:13:01 crc kubenswrapper[4930]: I1124 12:13:01.809280 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:13:10 crc kubenswrapper[4930]: I1124 12:13:10.419406 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-v5s6l" Nov 24 12:13:31 crc kubenswrapper[4930]: I1124 12:13:31.809562 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:13:31 crc kubenswrapper[4930]: 
I1124 12:13:31.810230 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:13:31 crc kubenswrapper[4930]: I1124 12:13:31.810305 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw"
Nov 24 12:13:31 crc kubenswrapper[4930]: I1124 12:13:31.811040 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c44ba46dc50db3a20b23969f9cbea1fb9792d70b783114e4cab0eaa15b434f1d"} pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 12:13:31 crc kubenswrapper[4930]: I1124 12:13:31.811103 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://c44ba46dc50db3a20b23969f9cbea1fb9792d70b783114e4cab0eaa15b434f1d" gracePeriod=600
Nov 24 12:13:32 crc kubenswrapper[4930]: I1124 12:13:32.107700 4930 generic.go:334] "Generic (PLEG): container finished" podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="c44ba46dc50db3a20b23969f9cbea1fb9792d70b783114e4cab0eaa15b434f1d" exitCode=0
Nov 24 12:13:32 crc kubenswrapper[4930]: I1124 12:13:32.107762 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"c44ba46dc50db3a20b23969f9cbea1fb9792d70b783114e4cab0eaa15b434f1d"}
Nov 24 12:13:32 crc kubenswrapper[4930]: I1124 12:13:32.108037 4930 scope.go:117] "RemoveContainer" containerID="a8f379626591aee6b54cbd3b52ff203403645d621f59c13e50ebe6f8ffb4735c"
Nov 24 12:13:33 crc kubenswrapper[4930]: I1124 12:13:33.115784 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"df660b89ae8561454b3d98787dfb50644dbca73ff06ad5c87819e47a0f113710"}
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.146124 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.163265 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.164546 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.167177 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.180172 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.182060 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.195854 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-p95m6"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.196243 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qf92c"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.196468 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-h8wfz"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.197027 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.204099 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.216661 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.242910 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.244001 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.246891 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-8b7rh"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.263696 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.275614 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.276874 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.287596 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-7vddc"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.294164 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2k6q\" (UniqueName: \"kubernetes.io/projected/cf778eca-e1fc-4619-9a85-aeda0fac014b-kube-api-access-f2k6q\") pod \"cinder-operator-controller-manager-6d8fd67bf7-56s9w\" (UID: \"cf778eca-e1fc-4619-9a85-aeda0fac014b\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.294215 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zvrd\" (UniqueName: \"kubernetes.io/projected/0752fe04-d0ea-4225-8e86-62c70618a5a1-kube-api-access-2zvrd\") pod \"barbican-operator-controller-manager-7768f8c84f-wqr7x\" (UID: \"0752fe04-d0ea-4225-8e86-62c70618a5a1\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.294299 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn2bv\" (UniqueName: \"kubernetes.io/projected/96a9f3c5-4eaa-4265-9c3e-f0c54dd0df0b-kube-api-access-xn2bv\") pod \"designate-operator-controller-manager-56dfb6b67f-wn7d4\" (UID: \"96a9f3c5-4eaa-4265-9c3e-f0c54dd0df0b\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.323649 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.324423 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.329489 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.330875 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.339912 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.341374 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.342201 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-x8f2r"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.342307 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.342199 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.343124 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-l8qrt"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.349753 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-kk7qr"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.351598 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.379296 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.384259 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.395267 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2k6q\" (UniqueName: \"kubernetes.io/projected/cf778eca-e1fc-4619-9a85-aeda0fac014b-kube-api-access-f2k6q\") pod \"cinder-operator-controller-manager-6d8fd67bf7-56s9w\" (UID: \"cf778eca-e1fc-4619-9a85-aeda0fac014b\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.395317 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zvrd\" (UniqueName: \"kubernetes.io/projected/0752fe04-d0ea-4225-8e86-62c70618a5a1-kube-api-access-2zvrd\") pod \"barbican-operator-controller-manager-7768f8c84f-wqr7x\" (UID: \"0752fe04-d0ea-4225-8e86-62c70618a5a1\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.395353 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8ptj\" (UniqueName: \"kubernetes.io/projected/525584f5-a41b-4189-986d-32f6c4e6bc16-kube-api-access-s8ptj\") pod \"heat-operator-controller-manager-bf4c6585d-22kp5\" (UID: \"525584f5-a41b-4189-986d-32f6c4e6bc16\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.395397 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlvbd\" (UniqueName: \"kubernetes.io/projected/2115a6ba-c1ea-45f6-a340-7ccd67a77bbd-kube-api-access-jlvbd\") pod \"glance-operator-controller-manager-8667fbf6f6-2jhpd\" (UID: \"2115a6ba-c1ea-45f6-a340-7ccd67a77bbd\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.395455 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn2bv\" (UniqueName: \"kubernetes.io/projected/96a9f3c5-4eaa-4265-9c3e-f0c54dd0df0b-kube-api-access-xn2bv\") pod \"designate-operator-controller-manager-56dfb6b67f-wn7d4\" (UID: \"96a9f3c5-4eaa-4265-9c3e-f0c54dd0df0b\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.403373 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.404705 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.405201 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.406812 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-vqtxz"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.407963 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.412816 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.413650 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-xw8mk"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.414172 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.418746 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nmtsm"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.431704 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.443004 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2k6q\" (UniqueName: \"kubernetes.io/projected/cf778eca-e1fc-4619-9a85-aeda0fac014b-kube-api-access-f2k6q\") pod \"cinder-operator-controller-manager-6d8fd67bf7-56s9w\" (UID: \"cf778eca-e1fc-4619-9a85-aeda0fac014b\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.443022 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zvrd\" (UniqueName: \"kubernetes.io/projected/0752fe04-d0ea-4225-8e86-62c70618a5a1-kube-api-access-2zvrd\") pod \"barbican-operator-controller-manager-7768f8c84f-wqr7x\" (UID: \"0752fe04-d0ea-4225-8e86-62c70618a5a1\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.443702 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn2bv\" (UniqueName: \"kubernetes.io/projected/96a9f3c5-4eaa-4265-9c3e-f0c54dd0df0b-kube-api-access-xn2bv\") pod \"designate-operator-controller-manager-56dfb6b67f-wn7d4\" (UID: \"96a9f3c5-4eaa-4265-9c3e-f0c54dd0df0b\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.461574 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.474349 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.475678 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.480037 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mnc2q"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.486066 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.496259 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.497244 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58s5c\" (UniqueName: \"kubernetes.io/projected/2652d83c-0fb2-41a7-a372-2f8e48ea33cc-kube-api-access-58s5c\") pod \"ironic-operator-controller-manager-5c75d7c94b-ngxgx\" (UID: \"2652d83c-0fb2-41a7-a372-2f8e48ea33cc\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.497291 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/606a5459-832e-4986-a171-4fd89e3ee1ec-cert\") pod \"infra-operator-controller-manager-769d9c7585-z7ftj\" (UID: \"606a5459-832e-4986-a171-4fd89e3ee1ec\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.497337 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8ptj\" (UniqueName: \"kubernetes.io/projected/525584f5-a41b-4189-986d-32f6c4e6bc16-kube-api-access-s8ptj\") pod \"heat-operator-controller-manager-bf4c6585d-22kp5\" (UID: \"525584f5-a41b-4189-986d-32f6c4e6bc16\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.499545 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlvbd\" (UniqueName: \"kubernetes.io/projected/2115a6ba-c1ea-45f6-a340-7ccd67a77bbd-kube-api-access-jlvbd\") pod \"glance-operator-controller-manager-8667fbf6f6-2jhpd\" (UID: \"2115a6ba-c1ea-45f6-a340-7ccd67a77bbd\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.499635 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk4d8\" (UniqueName: \"kubernetes.io/projected/606a5459-832e-4986-a171-4fd89e3ee1ec-kube-api-access-lk4d8\") pod \"infra-operator-controller-manager-769d9c7585-z7ftj\" (UID: \"606a5459-832e-4986-a171-4fd89e3ee1ec\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.499669 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjdr9\" (UniqueName: \"kubernetes.io/projected/5597a8b4-0ed5-4a54-ba70-6b2a7b9d2a8e-kube-api-access-cjdr9\") pod \"horizon-operator-controller-manager-5d86b44686-4svhq\" (UID: \"5597a8b4-0ed5-4a54-ba70-6b2a7b9d2a8e\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.512630 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.513693 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.515891 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-c78t5"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.535790 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.536867 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.538266 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.547895 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-4wxpm"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.552286 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.555651 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.609099 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.609434 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlvbd\" (UniqueName: \"kubernetes.io/projected/2115a6ba-c1ea-45f6-a340-7ccd67a77bbd-kube-api-access-jlvbd\") pod \"glance-operator-controller-manager-8667fbf6f6-2jhpd\" (UID: \"2115a6ba-c1ea-45f6-a340-7ccd67a77bbd\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.609761 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58s5c\" (UniqueName: \"kubernetes.io/projected/2652d83c-0fb2-41a7-a372-2f8e48ea33cc-kube-api-access-58s5c\") pod \"ironic-operator-controller-manager-5c75d7c94b-ngxgx\" (UID: \"2652d83c-0fb2-41a7-a372-2f8e48ea33cc\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.609804 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv6fk\" (UniqueName: \"kubernetes.io/projected/39e1c56a-84c3-4f33-a16d-77c62d65cd0f-kube-api-access-mv6fk\") pod \"neutron-operator-controller-manager-66b7d6f598-g2cfx\" (UID: \"39e1c56a-84c3-4f33-a16d-77c62d65cd0f\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.609832 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/606a5459-832e-4986-a171-4fd89e3ee1ec-cert\") pod \"infra-operator-controller-manager-769d9c7585-z7ftj\" (UID: \"606a5459-832e-4986-a171-4fd89e3ee1ec\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.609879 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5jr4\" (UniqueName: \"kubernetes.io/projected/37344a1b-ea4d-4dcf-a803-3811a5626106-kube-api-access-v5jr4\") pod \"keystone-operator-controller-manager-7879fb76fd-d9wbt\" (UID: \"37344a1b-ea4d-4dcf-a803-3811a5626106\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.609910 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2ddk\" (UniqueName: \"kubernetes.io/projected/a60dc80f-2382-4901-a79e-1468759d9281-kube-api-access-p2ddk\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-kdw5m\" (UID: \"a60dc80f-2382-4901-a79e-1468759d9281\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.609948 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pw9s\" (UniqueName: \"kubernetes.io/projected/4b01f462-8bc8-4f01-ac0c-76452c353177-kube-api-access-4pw9s\") pod \"manila-operator-controller-manager-7bb88cb858-4ffrf\" (UID: \"4b01f462-8bc8-4f01-ac0c-76452c353177\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.609968 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk4d8\" (UniqueName: \"kubernetes.io/projected/606a5459-832e-4986-a171-4fd89e3ee1ec-kube-api-access-lk4d8\") pod \"infra-operator-controller-manager-769d9c7585-z7ftj\" (UID: \"606a5459-832e-4986-a171-4fd89e3ee1ec\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.609999 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjdr9\" (UniqueName: \"kubernetes.io/projected/5597a8b4-0ed5-4a54-ba70-6b2a7b9d2a8e-kube-api-access-cjdr9\") pod \"horizon-operator-controller-manager-5d86b44686-4svhq\" (UID: \"5597a8b4-0ed5-4a54-ba70-6b2a7b9d2a8e\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.610497 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.611196 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8ptj\" (UniqueName: \"kubernetes.io/projected/525584f5-a41b-4189-986d-32f6c4e6bc16-kube-api-access-s8ptj\") pod \"heat-operator-controller-manager-bf4c6585d-22kp5\" (UID: \"525584f5-a41b-4189-986d-32f6c4e6bc16\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.612166 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.618250 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/606a5459-832e-4986-a171-4fd89e3ee1ec-cert\") pod \"infra-operator-controller-manager-769d9c7585-z7ftj\" (UID: \"606a5459-832e-4986-a171-4fd89e3ee1ec\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.619192 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.626616 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.628783 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.638921 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-l7d7n"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.649371 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjdr9\" (UniqueName: \"kubernetes.io/projected/5597a8b4-0ed5-4a54-ba70-6b2a7b9d2a8e-kube-api-access-cjdr9\") pod \"horizon-operator-controller-manager-5d86b44686-4svhq\" (UID: \"5597a8b4-0ed5-4a54-ba70-6b2a7b9d2a8e\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.661486 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk4d8\" (UniqueName: \"kubernetes.io/projected/606a5459-832e-4986-a171-4fd89e3ee1ec-kube-api-access-lk4d8\") pod \"infra-operator-controller-manager-769d9c7585-z7ftj\" (UID: \"606a5459-832e-4986-a171-4fd89e3ee1ec\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.661993 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58s5c\" (UniqueName: \"kubernetes.io/projected/2652d83c-0fb2-41a7-a372-2f8e48ea33cc-kube-api-access-58s5c\") pod \"ironic-operator-controller-manager-5c75d7c94b-ngxgx\" (UID: \"2652d83c-0fb2-41a7-a372-2f8e48ea33cc\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.666302 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.682222 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.696683 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.706744 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.708316 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.710741 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2ddk\" (UniqueName: \"kubernetes.io/projected/a60dc80f-2382-4901-a79e-1468759d9281-kube-api-access-p2ddk\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-kdw5m\" (UID: \"a60dc80f-2382-4901-a79e-1468759d9281\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.710829 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgg4s\" (UniqueName: \"kubernetes.io/projected/9e55dcae-85ee-412f-aa9b-3fc5a061d595-kube-api-access-jgg4s\") pod \"octavia-operator-controller-manager-6fdc856c5d-8kwlf\" (UID: \"9e55dcae-85ee-412f-aa9b-3fc5a061d595\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.710880 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pw9s\" (UniqueName: \"kubernetes.io/projected/4b01f462-8bc8-4f01-ac0c-76452c353177-kube-api-access-4pw9s\") pod \"manila-operator-controller-manager-7bb88cb858-4ffrf\" (UID: \"4b01f462-8bc8-4f01-ac0c-76452c353177\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.710960 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv6fk\" (UniqueName: \"kubernetes.io/projected/39e1c56a-84c3-4f33-a16d-77c62d65cd0f-kube-api-access-mv6fk\") pod \"neutron-operator-controller-manager-66b7d6f598-g2cfx\" (UID: \"39e1c56a-84c3-4f33-a16d-77c62d65cd0f\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.711007 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mttp5\" (UniqueName: \"kubernetes.io/projected/8bda08b7-e7e9-41cb-b01a-9d85a18a4ce8-kube-api-access-mttp5\") pod \"nova-operator-controller-manager-86d796d84d-2m7pb\" (UID: \"8bda08b7-e7e9-41cb-b01a-9d85a18a4ce8\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.711067 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5jr4\" (UniqueName: \"kubernetes.io/projected/37344a1b-ea4d-4dcf-a803-3811a5626106-kube-api-access-v5jr4\") pod \"keystone-operator-controller-manager-7879fb76fd-d9wbt\" (UID: \"37344a1b-ea4d-4dcf-a803-3811a5626106\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.711149 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.711918 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pjskk"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.769395 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pw9s\" (UniqueName: \"kubernetes.io/projected/4b01f462-8bc8-4f01-ac0c-76452c353177-kube-api-access-4pw9s\") pod \"manila-operator-controller-manager-7bb88cb858-4ffrf\" (UID: \"4b01f462-8bc8-4f01-ac0c-76452c353177\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.771281 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2ddk\" (UniqueName: \"kubernetes.io/projected/a60dc80f-2382-4901-a79e-1468759d9281-kube-api-access-p2ddk\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-kdw5m\" (UID: \"a60dc80f-2382-4901-a79e-1468759d9281\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.771635 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5jr4\" (UniqueName: \"kubernetes.io/projected/37344a1b-ea4d-4dcf-a803-3811a5626106-kube-api-access-v5jr4\") pod \"keystone-operator-controller-manager-7879fb76fd-d9wbt\" (UID: \"37344a1b-ea4d-4dcf-a803-3811a5626106\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.786685 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.788319 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.788422 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.789420 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.794329 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zjqrj"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.796203 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.797462 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.800739 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.801516 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7"]
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.801556 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-hh7vc"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.812142 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgg4s\" (UniqueName: \"kubernetes.io/projected/9e55dcae-85ee-412f-aa9b-3fc5a061d595-kube-api-access-jgg4s\") pod \"octavia-operator-controller-manager-6fdc856c5d-8kwlf\" (UID: \"9e55dcae-85ee-412f-aa9b-3fc5a061d595\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.812206 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6t96\" (UniqueName: \"kubernetes.io/projected/25cf6a11-4150-4091-a6b8-d7510c5ca5ac-kube-api-access-v6t96\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-g62mm\" (UID: \"25cf6a11-4150-4091-a6b8-d7510c5ca5ac\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.812236 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq\" (UID: \"f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.812300 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mttp5\" (UniqueName: \"kubernetes.io/projected/8bda08b7-e7e9-41cb-b01a-9d85a18a4ce8-kube-api-access-mttp5\") pod \"nova-operator-controller-manager-86d796d84d-2m7pb\" (UID: \"8bda08b7-e7e9-41cb-b01a-9d85a18a4ce8\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.812329 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpm6f\" (UniqueName: \"kubernetes.io/projected/f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9-kube-api-access-fpm6f\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq\" (UID: \"f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq"
Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.831361 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv6fk\" (UniqueName: \"kubernetes.io/projected/39e1c56a-84c3-4f33-a16d-77c62d65cd0f-kube-api-access-mv6fk\") pod
\"neutron-operator-controller-manager-66b7d6f598-g2cfx\" (UID: \"39e1c56a-84c3-4f33-a16d-77c62d65cd0f\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx" Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.846635 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc"] Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.877407 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgg4s\" (UniqueName: \"kubernetes.io/projected/9e55dcae-85ee-412f-aa9b-3fc5a061d595-kube-api-access-jgg4s\") pod \"octavia-operator-controller-manager-6fdc856c5d-8kwlf\" (UID: \"9e55dcae-85ee-412f-aa9b-3fc5a061d595\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf" Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.888386 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mttp5\" (UniqueName: \"kubernetes.io/projected/8bda08b7-e7e9-41cb-b01a-9d85a18a4ce8-kube-api-access-mttp5\") pod \"nova-operator-controller-manager-86d796d84d-2m7pb\" (UID: \"8bda08b7-e7e9-41cb-b01a-9d85a18a4ce8\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb" Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.900626 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb"] Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.902076 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.913498 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb" Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.914182 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-v5srz" Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.915366 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2j4m\" (UniqueName: \"kubernetes.io/projected/dbb47a0b-1e01-47b7-b57f-20e2e908674e-kube-api-access-s2j4m\") pod \"placement-operator-controller-manager-6dc664666c-qvfs7\" (UID: \"dbb47a0b-1e01-47b7-b57f-20e2e908674e\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7" Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.915439 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6t96\" (UniqueName: \"kubernetes.io/projected/25cf6a11-4150-4091-a6b8-d7510c5ca5ac-kube-api-access-v6t96\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-g62mm\" (UID: \"25cf6a11-4150-4091-a6b8-d7510c5ca5ac\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm" Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.915583 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq\" (UID: \"f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.915664 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpm6f\" (UniqueName: \"kubernetes.io/projected/f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9-kube-api-access-fpm6f\") pod 
\"openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq\" (UID: \"f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.915695 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdd6d\" (UniqueName: \"kubernetes.io/projected/6de96fac-ce97-4bec-a2af-f50f839454ea-kube-api-access-sdd6d\") pod \"swift-operator-controller-manager-799cb6ffd6-kzwpc\" (UID: \"6de96fac-ce97-4bec-a2af-f50f839454ea\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc" Nov 24 12:13:36 crc kubenswrapper[4930]: E1124 12:13:36.916036 4930 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:13:36 crc kubenswrapper[4930]: E1124 12:13:36.916085 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9-cert podName:f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9 nodeName:}" failed. No retries permitted until 2025-11-24 12:13:37.41606643 +0000 UTC m=+864.030394390 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9-cert") pod "openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" (UID: "f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.961918 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj" Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.973419 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf" Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.979312 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb"] Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.981337 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6t96\" (UniqueName: \"kubernetes.io/projected/25cf6a11-4150-4091-a6b8-d7510c5ca5ac-kube-api-access-v6t96\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-g62mm\" (UID: \"25cf6a11-4150-4091-a6b8-d7510c5ca5ac\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm" Nov 24 12:13:36 crc kubenswrapper[4930]: I1124 12:13:36.999393 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpm6f\" (UniqueName: \"kubernetes.io/projected/f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9-kube-api-access-fpm6f\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq\" (UID: \"f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.016789 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdd6d\" (UniqueName: \"kubernetes.io/projected/6de96fac-ce97-4bec-a2af-f50f839454ea-kube-api-access-sdd6d\") pod \"swift-operator-controller-manager-799cb6ffd6-kzwpc\" (UID: \"6de96fac-ce97-4bec-a2af-f50f839454ea\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.016847 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2j4m\" (UniqueName: \"kubernetes.io/projected/dbb47a0b-1e01-47b7-b57f-20e2e908674e-kube-api-access-s2j4m\") pod 
\"placement-operator-controller-manager-6dc664666c-qvfs7\" (UID: \"dbb47a0b-1e01-47b7-b57f-20e2e908674e\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.016882 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tspjf\" (UniqueName: \"kubernetes.io/projected/f7031ec9-a046-4f1f-93e0-a6da41013d68-kube-api-access-tspjf\") pod \"telemetry-operator-controller-manager-7798859c74-27cpb\" (UID: \"f7031ec9-a046-4f1f-93e0-a6da41013d68\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.024808 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m"] Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.032149 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m"] Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.032279 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.037998 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-w8v5j" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.038785 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.042052 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.051643 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdd6d\" (UniqueName: \"kubernetes.io/projected/6de96fac-ce97-4bec-a2af-f50f839454ea-kube-api-access-sdd6d\") pod \"swift-operator-controller-manager-799cb6ffd6-kzwpc\" (UID: \"6de96fac-ce97-4bec-a2af-f50f839454ea\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.082139 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2j4m\" (UniqueName: \"kubernetes.io/projected/dbb47a0b-1e01-47b7-b57f-20e2e908674e-kube-api-access-s2j4m\") pod \"placement-operator-controller-manager-6dc664666c-qvfs7\" (UID: \"dbb47a0b-1e01-47b7-b57f-20e2e908674e\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.094355 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j"] Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.099292 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.106426 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-rjc5l" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.112467 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j"] Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.116334 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.118545 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tspjf\" (UniqueName: \"kubernetes.io/projected/f7031ec9-a046-4f1f-93e0-a6da41013d68-kube-api-access-tspjf\") pod \"telemetry-operator-controller-manager-7798859c74-27cpb\" (UID: \"f7031ec9-a046-4f1f-93e0-a6da41013d68\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.151829 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tspjf\" (UniqueName: \"kubernetes.io/projected/f7031ec9-a046-4f1f-93e0-a6da41013d68-kube-api-access-tspjf\") pod \"telemetry-operator-controller-manager-7798859c74-27cpb\" (UID: \"f7031ec9-a046-4f1f-93e0-a6da41013d68\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.191047 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.197938 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99"] Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.199282 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.208359 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-btmc8" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.208751 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.219436 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5kzv\" (UniqueName: \"kubernetes.io/projected/21e42885-6ebc-4b29-a2d1-32f64e257e11-kube-api-access-t5kzv\") pod \"watcher-operator-controller-manager-7cd4fb6f79-2zd5j\" (UID: \"21e42885-6ebc-4b29-a2d1-32f64e257e11\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.219668 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wfvt\" (UniqueName: \"kubernetes.io/projected/6db937f0-a6f1-44e0-87b8-cd4e2d645e24-kube-api-access-4wfvt\") pod \"test-operator-controller-manager-8464cf66df-f2q9m\" (UID: \"6db937f0-a6f1-44e0-87b8-cd4e2d645e24\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.219759 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99"] Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.249520 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6"] Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.250790 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.255973 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bw5vk" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.261837 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.270581 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6"] Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.325057 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wfvt\" (UniqueName: \"kubernetes.io/projected/6db937f0-a6f1-44e0-87b8-cd4e2d645e24-kube-api-access-4wfvt\") pod \"test-operator-controller-manager-8464cf66df-f2q9m\" (UID: \"6db937f0-a6f1-44e0-87b8-cd4e2d645e24\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.325115 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm7nq\" (UniqueName: \"kubernetes.io/projected/bd00a0b4-94c5-4ce5-b162-65c27e70c254-kube-api-access-lm7nq\") pod \"openstack-operator-controller-manager-6cb9dc54f8-67b99\" (UID: \"bd00a0b4-94c5-4ce5-b162-65c27e70c254\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.325137 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5kzv\" (UniqueName: \"kubernetes.io/projected/21e42885-6ebc-4b29-a2d1-32f64e257e11-kube-api-access-t5kzv\") pod \"watcher-operator-controller-manager-7cd4fb6f79-2zd5j\" 
(UID: \"21e42885-6ebc-4b29-a2d1-32f64e257e11\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.325170 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd00a0b4-94c5-4ce5-b162-65c27e70c254-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-67b99\" (UID: \"bd00a0b4-94c5-4ce5-b162-65c27e70c254\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.334502 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.362071 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5kzv\" (UniqueName: \"kubernetes.io/projected/21e42885-6ebc-4b29-a2d1-32f64e257e11-kube-api-access-t5kzv\") pod \"watcher-operator-controller-manager-7cd4fb6f79-2zd5j\" (UID: \"21e42885-6ebc-4b29-a2d1-32f64e257e11\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.367312 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wfvt\" (UniqueName: \"kubernetes.io/projected/6db937f0-a6f1-44e0-87b8-cd4e2d645e24-kube-api-access-4wfvt\") pod \"test-operator-controller-manager-8464cf66df-f2q9m\" (UID: \"6db937f0-a6f1-44e0-87b8-cd4e2d645e24\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.374598 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w"] Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.409303 4930 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.426310 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq\" (UID: \"f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.426409 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm7nq\" (UniqueName: \"kubernetes.io/projected/bd00a0b4-94c5-4ce5-b162-65c27e70c254-kube-api-access-lm7nq\") pod \"openstack-operator-controller-manager-6cb9dc54f8-67b99\" (UID: \"bd00a0b4-94c5-4ce5-b162-65c27e70c254\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.426446 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd00a0b4-94c5-4ce5-b162-65c27e70c254-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-67b99\" (UID: \"bd00a0b4-94c5-4ce5-b162-65c27e70c254\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.426468 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nthr\" (UniqueName: \"kubernetes.io/projected/83d079ef-a30c-458e-a350-c6f6d9a8985f-kube-api-access-8nthr\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6\" (UID: \"83d079ef-a30c-458e-a350-c6f6d9a8985f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6" Nov 24 12:13:37 crc kubenswrapper[4930]: E1124 12:13:37.426893 4930 
secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 12:13:37 crc kubenswrapper[4930]: E1124 12:13:37.426940 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd00a0b4-94c5-4ce5-b162-65c27e70c254-cert podName:bd00a0b4-94c5-4ce5-b162-65c27e70c254 nodeName:}" failed. No retries permitted until 2025-11-24 12:13:37.926926393 +0000 UTC m=+864.541254343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd00a0b4-94c5-4ce5-b162-65c27e70c254-cert") pod "openstack-operator-controller-manager-6cb9dc54f8-67b99" (UID: "bd00a0b4-94c5-4ce5-b162-65c27e70c254") : secret "webhook-server-cert" not found Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.430784 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq\" (UID: \"f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.444262 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm7nq\" (UniqueName: \"kubernetes.io/projected/bd00a0b4-94c5-4ce5-b162-65c27e70c254-kube-api-access-lm7nq\") pod \"openstack-operator-controller-manager-6cb9dc54f8-67b99\" (UID: \"bd00a0b4-94c5-4ce5-b162-65c27e70c254\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.457265 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.527368 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nthr\" (UniqueName: \"kubernetes.io/projected/83d079ef-a30c-458e-a350-c6f6d9a8985f-kube-api-access-8nthr\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6\" (UID: \"83d079ef-a30c-458e-a350-c6f6d9a8985f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.548246 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nthr\" (UniqueName: \"kubernetes.io/projected/83d079ef-a30c-458e-a350-c6f6d9a8985f-kube-api-access-8nthr\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6\" (UID: \"83d079ef-a30c-458e-a350-c6f6d9a8985f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.560373 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd"] Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.597710 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.686767 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.935322 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd00a0b4-94c5-4ce5-b162-65c27e70c254-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-67b99\" (UID: \"bd00a0b4-94c5-4ce5-b162-65c27e70c254\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" Nov 24 12:13:37 crc kubenswrapper[4930]: I1124 12:13:37.943420 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd00a0b4-94c5-4ce5-b162-65c27e70c254-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-67b99\" (UID: \"bd00a0b4-94c5-4ce5-b162-65c27e70c254\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.071957 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.078979 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4"] Nov 24 12:13:38 crc kubenswrapper[4930]: W1124 12:13:38.083288 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96a9f3c5_4eaa_4265_9c3e_f0c54dd0df0b.slice/crio-fb90dfa347c29eef0c1a4deca55ec8920f635aaf64ed35ae1f3cf74724f15dc4 WatchSource:0}: Error finding container fb90dfa347c29eef0c1a4deca55ec8920f635aaf64ed35ae1f3cf74724f15dc4: Status 404 returned error can't find the container with id fb90dfa347c29eef0c1a4deca55ec8920f635aaf64ed35ae1f3cf74724f15dc4 Nov 24 12:13:38 crc kubenswrapper[4930]: W1124 12:13:38.083722 4930 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0752fe04_d0ea_4225_8e86_62c70618a5a1.slice/crio-8146fc48b1e9c77be04921266c40bbbdbf37a8c28828c21b501636ec8cbf1484 WatchSource:0}: Error finding container 8146fc48b1e9c77be04921266c40bbbdbf37a8c28828c21b501636ec8cbf1484: Status 404 returned error can't find the container with id 8146fc48b1e9c77be04921266c40bbbdbf37a8c28828c21b501636ec8cbf1484 Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.097153 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5"] Nov 24 12:13:38 crc kubenswrapper[4930]: W1124 12:13:38.099478 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod525584f5_a41b_4189_986d_32f6c4e6bc16.slice/crio-0be054673b51524b82639a0a2ba5c247b0b3449ddb1c052d023967f2d58e3f5a WatchSource:0}: Error finding container 0be054673b51524b82639a0a2ba5c247b0b3449ddb1c052d023967f2d58e3f5a: Status 404 returned error can't find the container with id 0be054673b51524b82639a0a2ba5c247b0b3449ddb1c052d023967f2d58e3f5a Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.174823 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.185600 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4" event={"ID":"96a9f3c5-4eaa-4265-9c3e-f0c54dd0df0b","Type":"ContainerStarted","Data":"fb90dfa347c29eef0c1a4deca55ec8920f635aaf64ed35ae1f3cf74724f15dc4"} Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.187447 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd" event={"ID":"2115a6ba-c1ea-45f6-a340-7ccd67a77bbd","Type":"ContainerStarted","Data":"cd0da17bdf3e5eb0093969b1640f3066675d725a1c82dbf529e1d9b870091553"} Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.189551 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w" event={"ID":"cf778eca-e1fc-4619-9a85-aeda0fac014b","Type":"ContainerStarted","Data":"ba4118993157dc4b15b03501ab7c69b70d2783e2aa222179bfd3c7656b346520"} Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.190631 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x" event={"ID":"0752fe04-d0ea-4225-8e86-62c70618a5a1","Type":"ContainerStarted","Data":"8146fc48b1e9c77be04921266c40bbbdbf37a8c28828c21b501636ec8cbf1484"} Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.191672 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5" event={"ID":"525584f5-a41b-4189-986d-32f6c4e6bc16","Type":"ContainerStarted","Data":"0be054673b51524b82639a0a2ba5c247b0b3449ddb1c052d023967f2d58e3f5a"} Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.417296 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.429697 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx"] Nov 24 12:13:38 crc kubenswrapper[4930]: W1124 12:13:38.432997 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b01f462_8bc8_4f01_ac0c_76452c353177.slice/crio-ac305259e52f5b374c53663fd0439f0a82d018dd10900933943aa27070a26dc2 WatchSource:0}: Error finding container ac305259e52f5b374c53663fd0439f0a82d018dd10900933943aa27070a26dc2: Status 404 returned error can't find the container with id ac305259e52f5b374c53663fd0439f0a82d018dd10900933943aa27070a26dc2 Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.447116 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.453853 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.469768 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.480258 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.492935 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.500186 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq"] 
Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.512179 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.517066 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt"] Nov 24 12:13:38 crc kubenswrapper[4930]: E1124 12:13:38.528487 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5jr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7879fb76fd-d9wbt_openstack-operators(37344a1b-ea4d-4dcf-a803-3811a5626106): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:13:38 crc kubenswrapper[4930]: E1124 12:13:38.707681 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt" podUID="37344a1b-ea4d-4dcf-a803-3811a5626106" Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.799995 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.811449 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.826112 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.834934 4930 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.842789 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.847996 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.851575 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m"] Nov 24 12:13:38 crc kubenswrapper[4930]: I1124 12:13:38.866672 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99"] Nov 24 12:13:38 crc kubenswrapper[4930]: E1124 12:13:38.867462 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t5kzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7cd4fb6f79-2zd5j_openstack-operators(21e42885-6ebc-4b29-a2d1-32f64e257e11): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:13:38 crc kubenswrapper[4930]: E1124 12:13:38.869500 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sdd6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
swift-operator-controller-manager-799cb6ffd6-kzwpc_openstack-operators(6de96fac-ce97-4bec-a2af-f50f839454ea): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:13:38 crc kubenswrapper[4930]: E1124 12:13:38.875959 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:7dbadf7b98f2f305f9f1382f55a084c8ca404f4263f76b28e56bd0dc437e2192,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:0473ff9eec0da231e2d0a10bf1abbe1dfa1a0f95b8f619e3a07605386951449a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:c8101c77a82eae4407e41e1fd766dfc6e1b7f9ed1679e3efb6f91ff97a1557b2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:eb9743b21bbadca6f7cb9ac4fc46b5d58c51c674073c7e1121f4474a71304071,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:3d81f839b98c2e2a5bf0da79f2f9a92dff7d0a3c5a830b0e95c89dad8cf98a6a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:d
19ac99249b47dd8ea16cd6aaa5756346aa8a2f119ee50819c15c5366efb417d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24@sha256:8536169e5537fe6c330eba814248abdcf39cdd8f7e7336034d74e6fda9544050,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:4f1fa337760e82bfd67cdd142a97c121146dd7e621daac161940dd5e4ddb80dc,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:3613b345d5baed98effd906f8b0242d863e14c97078ea473ef01fe1b0afc46f3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:d375d370be5ead0dac71109af644849e5795f535f9ad8eeacea261d77ae6f140,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:9f9f367ed4c85efb16c3a74a4bb707ff0db271d7bc5abc70a71e984b55f43003,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:b73ad22b4955b06d584bce81742556d8c0c7828c495494f8ea7c99391c61b70f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:aa1d3aaf6b394621ed4089a98e0a82b763f467e8b5c5db772f9fdf99fc86e333,ValueFrom:nil,},
EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:d6661053141b6df421288a7c9968a155ab82e478c1d75ab41f2cebe2f0ca02d2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:ce2d63258cb4e7d0d1c07234de6889c5434464190906798019311a1c7cf6387f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:0485ef9e5b4437f7cd2ba54034a87722ce4669ee86b3773c6b0c037ed8000e91,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api@sha256:962c004551d0503779364b767b9bf0cecdf78dbba8809b2ca8b073f58e1f4e5d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor@sha256:0ebf4c465fb6cc7dad9e6cb2da0ff54874c9acbcb40d62234a629ec2c12cdd62,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api@sha256:ff0c553ceeb2e0f44b010e37dc6d0db8a251797b88e56468b7cf7f05253e4232,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:624f553f073af7493d34828b074adc9981cce403edd8e71482c7307008479fd9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay
.io/podified-antelope-centos9/openstack-designate-central@sha256:e3874936a518c8560339db8f840fc5461885819f6050b5de8d3ab9199bea5094,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:1cea25f1d2a45affc80c46fb9d427749d3f06b61590ac6070a2910e3ec8a4e5d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:e36d5b9a65194f12f7b01c6422ba3ed52a687fd1695fbb21f4986c67d9f9317f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound@sha256:8b21bec527d54cd766e277889df6bcccd2baeaa946274606b986c0c3b7ca689f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker@sha256:45aceca77f8fcf61127f0da650bdfdf11ede9b0944c78b63fab819d03283f96b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr@sha256:709ac58998927dd61786821ae1e63343fd97ccf5763aac5edb4583eea9401d22,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid@sha256:867d4ef7c21f75e6030a685b5762ab4d84b671316ed6b98d75200076e93342cd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron@sha256:2b90da93550b99d2fcfa95bd819f3363aa68346a416f8dc7baac3e9c5f487761,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24,ValueFrom:n
il,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:8cde52cef8795d1c91983b100d86541c7718160ec260fe0f97b96add4c2c8ee8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:835ebed082fe1c45bd799d1d5357595ce63efeb05ca876f26b08443facb9c164,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:011d682241db724bc40736c9b54d2ea450ea7e6be095b1ff5fa28c8007466775,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:2025da90cff8f563deb08bee71efe16d4078edc2a767b2e225cca5c77f1aa2f9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:26bd7b0bd6070856aefef6fe754c547d55c056396ea30d879d34c2d49b5a1d29,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api@sha256:ff46cd5e0e13d105c4629e78c2734a50835f06b6a1e31da9e0462981d10c4be3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/po
dified-antelope-centos9/openstack-heat-api-cfn@sha256:5b4fd0c2b76fa5539f74687b11c5882d77bd31352452322b37ff51fa18f12a61,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine@sha256:5e03376bd895346dc8f627ca15ded942526ed8b5e92872f453ce272e694d18d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached@sha256:36a0fb31978aee0ded2483de311631e64a644d0b0685b5b055f65ede7eb8e8a2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis@sha256:5f6045841aff0fde6f684a34cdf49f8dc7b2c3bcbdeab201f1058971e0c5f79e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:448f4e1b740c30936e340bd6e8534d78c83357bf373a4223950aa64d3484f007,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:b68e3615af8a0eb0ef6bf9ceeef59540a6f4a9a85f6078a3620be115c73a7db8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:7eae01cf60383e523c9cd94d158a9162120a7370829a1dad20fdea6b0fd660bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:28cc10501788081eb61b5a1af35546191a92741f4f109df54c74e2b19439d0f9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:9a616e37acfd120612f78043237a8541266ba34883833c9beb43f3da313661ad,Value
From:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent@sha256:6b1be6cd94a0942259bca5d5d2c30cc7de4a33276b61f8ae3940226772106256,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone@sha256:02d2c22d15401574941fbe057095442dee0d6f7a0a9341de35d25e6a12a3fe4b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api@sha256:fc3b3a36b74fd653946723c54b208072d52200635850b531e9d595a7aaea5a01,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:7850ccbff320bf9a1c9c769c1c70777eb97117dd8cd5ae4435be9b4622cf807a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share@sha256:397dac7e39cf40d14a986e6ec4a60fb698ca35c197d0db315b1318514cc6d1d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils@sha256:1c95142a36276686e720f86423ee171dc9adcc1e89879f627545b7c906ccd9bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api@sha256:e331a8fde6638e5ba154c4
f0b38772a9a424f60656f2777245975fb1fa02f07d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:cd3cf7a34053e850b4d4f9f4ea4c74953a54a42fd18e47d7c01d44a88923e925,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:aee28476344fc0cc148fbe97daf9b1bfcedc22001550bba4bdc4e84be7b6989d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:cfa0b92c976603ee2a937d34013a238fcd8aa75f998e50642e33489f14124633,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:73c2f2d6eecf88acf4e45b133c8373d9bb006b530e0aff0b28f3b7420620a874,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:927b405cc04abe5ff716186e8d35e2dc5fad1c8430194659ee6617d74e4e055d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:6154d7cebd7c339afa5b86330262156171743aa5b79c2b78f9a2f378005ed8fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:e2db2f4af8d3d0be7868c6efef0189f3a2c74a8f96ae10e3f991cdf83feaef29,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:c773629df257726a6d3cacc24a6e4df0babcd7d37df04e6d14676a8da028b9c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE
_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:776211111e2e6493706dbc49a3ba44f31d1b947919313ed3a0f35810e304ec52,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather@sha256:0a98e8f5c83522ca6c8e40c5e9561f6628d2d5e69f0e8a64279c541c989d3d8b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:7cccf24ad0a152f90ca39893064f48a1656950ee8142685a5d482c71f0bdc9f5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:05450b48f6b5352b2686a26e933e8727748edae2ae9652d9164b7d7a1817c55a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:fc9c99eeef91523482bd8f92661b393287e1f2a24ad2ba9e33191f8de9af74cf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:3e4ecc02b4b5e0860482a93599ba9ca598c5ce26c093c46e701f96fe51acb208,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:2346037e064861c7892690d2e8b3e1eea1a26ce3c3a11fda0b41301965bc828c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-ap
i@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account@sha256:c26c3ff9cabe3593ceb10006e782bf9391ac14785768ce9eec4f938c2d3cf228,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object@sha256:daa45220bb1c47922d0917aa8fe423bb82b03a01429f1c9e37635e701e352d71,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:a80a074e227d3238bb6f285788a9e886ae7a5909ccbc5c19c93c369bdfe5b3b8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:58ac66ca1be01fe0157977bd79a26cde4d0de153edfaf4162367c924826b2ef4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api@sha256:99a63770d80cc7c3afa1118b400972fb0e6bff5284a2eae781b12582ad79c29c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier@sha256:9ee4d84529394afcd860f1a1186484560f02f08c15c37cac42a22473b7116d5f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:ea15fadda7b0439ec637edfaf6ea5dbf3e35fb3be012c7c5a31e722c90becb11,ValueFrom:nil,},},Resources:Re
sourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fpm6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq_openstack-operators(f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 
24 12:13:38 crc kubenswrapper[4930]: E1124 12:13:38.880721 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tspjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7798859c74-27cpb_openstack-operators(f7031ec9-a046-4f1f-93e0-a6da41013d68): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:13:38 crc kubenswrapper[4930]: E1124 12:13:38.884400 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4wfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8464cf66df-f2q9m_openstack-operators(6db937f0-a6f1-44e0-87b8-cd4e2d645e24): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:13:38 crc kubenswrapper[4930]: W1124 12:13:38.900967 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbb47a0b_1e01_47b7_b57f_20e2e908674e.slice/crio-c78c0fd1b80977abe7d7bb0b925cac172b43c63e18057d0d2d4efbbfe21c7a54 WatchSource:0}: Error finding container 
c78c0fd1b80977abe7d7bb0b925cac172b43c63e18057d0d2d4efbbfe21c7a54: Status 404 returned error can't find the container with id c78c0fd1b80977abe7d7bb0b925cac172b43c63e18057d0d2d4efbbfe21c7a54 Nov 24 12:13:39 crc kubenswrapper[4930]: E1124 12:13:39.096493 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc" podUID="6de96fac-ce97-4bec-a2af-f50f839454ea" Nov 24 12:13:39 crc kubenswrapper[4930]: E1124 12:13:39.156751 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" podUID="21e42885-6ebc-4b29-a2d1-32f64e257e11" Nov 24 12:13:39 crc kubenswrapper[4930]: E1124 12:13:39.202723 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" podUID="f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9" Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.214752 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6" event={"ID":"83d079ef-a30c-458e-a350-c6f6d9a8985f","Type":"ContainerStarted","Data":"2cecf48f3d830bc92817ee554a2d60eff8e3751dd2838aa9c519d04e5a776ca5"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.215823 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx" event={"ID":"39e1c56a-84c3-4f33-a16d-77c62d65cd0f","Type":"ContainerStarted","Data":"215d69c733baed43df16cc699e66c954ac74858d760b0de7ae87d78d3eebc7d1"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.218406 4930 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" event={"ID":"bd00a0b4-94c5-4ce5-b162-65c27e70c254","Type":"ContainerStarted","Data":"01d14757ff34ecde7476e4387097550a0b05e2098b590e441a2ce053178f5601"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.221151 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt" event={"ID":"37344a1b-ea4d-4dcf-a803-3811a5626106","Type":"ContainerStarted","Data":"fd6c0013f8078f9882e7ef2b7806288a20aa0c6f5f8795ac0154bccb209d2a64"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.221183 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt" event={"ID":"37344a1b-ea4d-4dcf-a803-3811a5626106","Type":"ContainerStarted","Data":"70c9bdf222b5cecf4eefbf7c09196ecbe5afaec11667a2968c4511efafa137a5"} Nov 24 12:13:39 crc kubenswrapper[4930]: E1124 12:13:39.223265 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt" podUID="37344a1b-ea4d-4dcf-a803-3811a5626106" Nov 24 12:13:39 crc kubenswrapper[4930]: E1124 12:13:39.229909 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" podUID="f7031ec9-a046-4f1f-93e0-a6da41013d68" Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.230462 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" 
event={"ID":"6db937f0-a6f1-44e0-87b8-cd4e2d645e24","Type":"ContainerStarted","Data":"f0d1f3468c936a2841fe0a8368f0e1484bec31055fd8a656b6e796238e2aa4e3"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.232413 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m" event={"ID":"a60dc80f-2382-4901-a79e-1468759d9281","Type":"ContainerStarted","Data":"53dac9737f77d2e6899a94255d2a5d7b6a70f2c5acbfee346eb7b0d0b4147dc2"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.235209 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7" event={"ID":"dbb47a0b-1e01-47b7-b57f-20e2e908674e","Type":"ContainerStarted","Data":"c78c0fd1b80977abe7d7bb0b925cac172b43c63e18057d0d2d4efbbfe21c7a54"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.243910 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" event={"ID":"f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9","Type":"ContainerStarted","Data":"5d1d9c2dcc5270766b66cdd61c5650ab9ff12a337a3667bf8633669818291146"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.243964 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" event={"ID":"f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9","Type":"ContainerStarted","Data":"7410620b869b457abe8a613070927177831072793d0a9cfa8adc1aedb9fe42e1"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.244907 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx" event={"ID":"2652d83c-0fb2-41a7-a372-2f8e48ea33cc","Type":"ContainerStarted","Data":"b21048e7fd9fe194895f57fbe33bfa3f81a5a1b7e26dd1ae7f52a3ae5400fc2f"} Nov 24 12:13:39 crc kubenswrapper[4930]: E1124 12:13:39.245581 4930 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" podUID="f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9" Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.246856 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf" event={"ID":"9e55dcae-85ee-412f-aa9b-3fc5a061d595","Type":"ContainerStarted","Data":"3519e7a8e0f3a7637c7bd56d1e34b30d04e964441b2f25bdf63b12acbc898f0c"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.257371 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc" event={"ID":"6de96fac-ce97-4bec-a2af-f50f839454ea","Type":"ContainerStarted","Data":"fe6aa504d66cef8294f2b74ce9d7d238245d503132d128162501373d3115aae4"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.257423 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc" event={"ID":"6de96fac-ce97-4bec-a2af-f50f839454ea","Type":"ContainerStarted","Data":"137b7bf26191dbcd594234e22f77f71753a53395c53e07ba47b2bc4aa3d0f662"} Nov 24 12:13:39 crc kubenswrapper[4930]: E1124 12:13:39.259070 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc" podUID="6de96fac-ce97-4bec-a2af-f50f839454ea" Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 
12:13:39.259476 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq" event={"ID":"5597a8b4-0ed5-4a54-ba70-6b2a7b9d2a8e","Type":"ContainerStarted","Data":"ba687c9bd726e3043ed4984cf805d83d707ae87aa86ad8364d4fe583606b0bac"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.264434 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb" event={"ID":"8bda08b7-e7e9-41cb-b01a-9d85a18a4ce8","Type":"ContainerStarted","Data":"0e4a39cbf1448e9f366e2bc0c0f23f70883b4306b05e8dbbd2a2f799170c4362"} Nov 24 12:13:39 crc kubenswrapper[4930]: E1124 12:13:39.264489 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" podUID="6db937f0-a6f1-44e0-87b8-cd4e2d645e24" Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.267444 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj" event={"ID":"606a5459-832e-4986-a171-4fd89e3ee1ec","Type":"ContainerStarted","Data":"51105f23f658ffae572a69672e1dc3e73b2c22ed5a554f4c0863934f777d125f"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.268663 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf" event={"ID":"4b01f462-8bc8-4f01-ac0c-76452c353177","Type":"ContainerStarted","Data":"ac305259e52f5b374c53663fd0439f0a82d018dd10900933943aa27070a26dc2"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.272507 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" 
event={"ID":"f7031ec9-a046-4f1f-93e0-a6da41013d68","Type":"ContainerStarted","Data":"9fca05c58a2fcdd9cf2c40ecd62742ab9a018cfd0b481b1b1fdd7da68e35116e"} Nov 24 12:13:39 crc kubenswrapper[4930]: E1124 12:13:39.273979 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" podUID="f7031ec9-a046-4f1f-93e0-a6da41013d68" Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.274373 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" event={"ID":"21e42885-6ebc-4b29-a2d1-32f64e257e11","Type":"ContainerStarted","Data":"82ecc67e47fd157523d9c891c22592017aa1fea9f20031fac5956a5ce7aaf6fd"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.274409 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" event={"ID":"21e42885-6ebc-4b29-a2d1-32f64e257e11","Type":"ContainerStarted","Data":"1a5973a591520e01647ffcc9acdc13c473ad4b2862e5c2631c9b3484526daab6"} Nov 24 12:13:39 crc kubenswrapper[4930]: I1124 12:13:39.278060 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm" event={"ID":"25cf6a11-4150-4091-a6b8-d7510c5ca5ac","Type":"ContainerStarted","Data":"38940612113ca086284ccc02b154ad99001f0e7f0ef060fb9af775e4ff12acdb"} Nov 24 12:13:39 crc kubenswrapper[4930]: E1124 12:13:39.281709 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" podUID="21e42885-6ebc-4b29-a2d1-32f64e257e11" Nov 24 12:13:40 crc kubenswrapper[4930]: I1124 12:13:40.297786 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" event={"ID":"6db937f0-a6f1-44e0-87b8-cd4e2d645e24","Type":"ContainerStarted","Data":"f5c5aa2c8c64b27119ecbc687177a47770000dc63dd90bbe4b6335b1b2849fba"} Nov 24 12:13:40 crc kubenswrapper[4930]: E1124 12:13:40.300853 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" podUID="6db937f0-a6f1-44e0-87b8-cd4e2d645e24" Nov 24 12:13:40 crc kubenswrapper[4930]: I1124 12:13:40.302620 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" event={"ID":"f7031ec9-a046-4f1f-93e0-a6da41013d68","Type":"ContainerStarted","Data":"302c05580400496d4574f399cff3cd9f482c98c1952dd232987cf19274759ea2"} Nov 24 12:13:40 crc kubenswrapper[4930]: E1124 12:13:40.306763 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" podUID="f7031ec9-a046-4f1f-93e0-a6da41013d68" Nov 24 12:13:40 crc kubenswrapper[4930]: I1124 12:13:40.313730 4930 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" event={"ID":"bd00a0b4-94c5-4ce5-b162-65c27e70c254","Type":"ContainerStarted","Data":"69153171d0e7bf54960609295ece30c4ecb8e8f8e871fd839e29f9e0c1374fc5"} Nov 24 12:13:40 crc kubenswrapper[4930]: I1124 12:13:40.313795 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" event={"ID":"bd00a0b4-94c5-4ce5-b162-65c27e70c254","Type":"ContainerStarted","Data":"299c3f97848ef9199b5e6ce5041e317b978c8483842c5567ff59edeb01c71f31"} Nov 24 12:13:40 crc kubenswrapper[4930]: I1124 12:13:40.313819 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" Nov 24 12:13:40 crc kubenswrapper[4930]: E1124 12:13:40.322156 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" podUID="21e42885-6ebc-4b29-a2d1-32f64e257e11" Nov 24 12:13:40 crc kubenswrapper[4930]: E1124 12:13:40.322158 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc" podUID="6de96fac-ce97-4bec-a2af-f50f839454ea" Nov 24 12:13:40 crc kubenswrapper[4930]: E1124 12:13:40.322320 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt" podUID="37344a1b-ea4d-4dcf-a803-3811a5626106" Nov 24 12:13:40 crc kubenswrapper[4930]: E1124 12:13:40.325641 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" podUID="f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9" Nov 24 12:13:40 crc kubenswrapper[4930]: I1124 12:13:40.473903 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" podStartSLOduration=3.473885321 podStartE2EDuration="3.473885321s" podCreationTimestamp="2025-11-24 12:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:13:40.47141472 +0000 UTC m=+867.085742670" watchObservedRunningTime="2025-11-24 12:13:40.473885321 +0000 UTC m=+867.088213271" Nov 24 12:13:41 crc kubenswrapper[4930]: E1124 12:13:41.324371 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" podUID="f7031ec9-a046-4f1f-93e0-a6da41013d68" Nov 24 12:13:41 crc kubenswrapper[4930]: E1124 12:13:41.325430 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" podUID="6db937f0-a6f1-44e0-87b8-cd4e2d645e24" Nov 24 12:13:48 crc kubenswrapper[4930]: I1124 12:13:48.182581 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-67b99" Nov 24 12:13:49 crc kubenswrapper[4930]: I1124 12:13:49.391735 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj" event={"ID":"606a5459-832e-4986-a171-4fd89e3ee1ec","Type":"ContainerStarted","Data":"68e26b257e7ee99a99e38943b4ee22ba086ff7cad3c5a9956e38b9d416fa22fc"} Nov 24 12:13:49 crc kubenswrapper[4930]: I1124 12:13:49.398083 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx" event={"ID":"39e1c56a-84c3-4f33-a16d-77c62d65cd0f","Type":"ContainerStarted","Data":"0c2c4530f28426a45db1aed16d4431a5b3d6a7a30ece19d8784e7650d87f6b85"} Nov 24 12:13:49 crc kubenswrapper[4930]: I1124 12:13:49.400569 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf" event={"ID":"4b01f462-8bc8-4f01-ac0c-76452c353177","Type":"ContainerStarted","Data":"fb9428cc08dc301b412622207d6808e0294f09d8f8516e9d5be538b0b54a9a96"} Nov 24 12:13:49 crc kubenswrapper[4930]: I1124 12:13:49.403688 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx" event={"ID":"2652d83c-0fb2-41a7-a372-2f8e48ea33cc","Type":"ContainerStarted","Data":"d03803dde70705b5f6b27e8e8116946b7e8836d2faacdeaff713aa49ba487d8f"} Nov 24 12:13:49 crc kubenswrapper[4930]: I1124 12:13:49.405081 4930 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w" event={"ID":"cf778eca-e1fc-4619-9a85-aeda0fac014b","Type":"ContainerStarted","Data":"b45ba95594b2dde5286f978cb53c2982cd8dd52549f18d6d2f4c0d63980f864c"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.450605 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb" event={"ID":"8bda08b7-e7e9-41cb-b01a-9d85a18a4ce8","Type":"ContainerStarted","Data":"77e7e9ecf42aa8bf294af1e2a624ad65075171f2eab74c5aa250f55da5e9bc68"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.464918 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m" event={"ID":"a60dc80f-2382-4901-a79e-1468759d9281","Type":"ContainerStarted","Data":"88cb1145c8dc4da616bcbba8296e9f2d46fb89e801e0a0db912e843ad57d2847"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.478422 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf" event={"ID":"4b01f462-8bc8-4f01-ac0c-76452c353177","Type":"ContainerStarted","Data":"5926cc72ccc8d7fdc84cf54244468742ea9e795ed72e115d7018484d467cb963"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.479629 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf" Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.514703 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5" event={"ID":"525584f5-a41b-4189-986d-32f6c4e6bc16","Type":"ContainerStarted","Data":"6163c13994cebd1ae58a73f3ac516a9057d30353e4db6a7bfc2791eac6b323aa"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.527576 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7" event={"ID":"dbb47a0b-1e01-47b7-b57f-20e2e908674e","Type":"ContainerStarted","Data":"064966b9839a262f23d23e01f17f9fec0589f4b655b0ae0668a18621ed06bc60"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.530557 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6" event={"ID":"83d079ef-a30c-458e-a350-c6f6d9a8985f","Type":"ContainerStarted","Data":"2e2532432731660a5ec80562752a9cb8f1047e62ae0f82cf8ece6926bb2de8a0"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.545794 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq" event={"ID":"5597a8b4-0ed5-4a54-ba70-6b2a7b9d2a8e","Type":"ContainerStarted","Data":"c24efde6b8ca791470d5136bbec00ff89cd383056661cf2cd09a4fda67c4f2f3"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.549264 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w" event={"ID":"cf778eca-e1fc-4619-9a85-aeda0fac014b","Type":"ContainerStarted","Data":"e71d4304e00716cba7fc169e43604e014804d218b390e66551484a012a739858"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.550754 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w" Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.556308 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf" event={"ID":"9e55dcae-85ee-412f-aa9b-3fc5a061d595","Type":"ContainerStarted","Data":"a1c75e0f3a2e110db1c061d9c9a00611534425c1856ba1dc6f5b0fd5f6ff949f"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.567681 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4" event={"ID":"96a9f3c5-4eaa-4265-9c3e-f0c54dd0df0b","Type":"ContainerStarted","Data":"8094b2227f78f62641d38afaeeb8347178fa12ce070a9bd4c0a45e558e29399b"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.573071 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx" event={"ID":"2652d83c-0fb2-41a7-a372-2f8e48ea33cc","Type":"ContainerStarted","Data":"f8f4527005e78c7af96bc4150210d68c0b5ccb0f8a4dd4992a6aff4379472cd2"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.573182 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx" Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.576999 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf" podStartSLOduration=4.195511059 podStartE2EDuration="14.576980088s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.435772385 +0000 UTC m=+865.050100335" lastFinishedPulling="2025-11-24 12:13:48.817241414 +0000 UTC m=+875.431569364" observedRunningTime="2025-11-24 12:13:50.514324412 +0000 UTC m=+877.128652362" watchObservedRunningTime="2025-11-24 12:13:50.576980088 +0000 UTC m=+877.191308038" Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.578656 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd" event={"ID":"2115a6ba-c1ea-45f6-a340-7ccd67a77bbd","Type":"ContainerStarted","Data":"cc61087336e343459c77bb84cdc5209619c37562d4017788734142b253e25102"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.587052 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm" 
event={"ID":"25cf6a11-4150-4091-a6b8-d7510c5ca5ac","Type":"ContainerStarted","Data":"8a02e6b50298bed0826263aace4a3e69f516673b2e5fa54b1f787c9c2cdeee89"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.600609 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x" event={"ID":"0752fe04-d0ea-4225-8e86-62c70618a5a1","Type":"ContainerStarted","Data":"e4bbd10d655af9f1f634ef7eab6b1d009d8a7686d67f22029994318d82774e8f"} Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.602127 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w" podStartSLOduration=3.276131645 podStartE2EDuration="14.602105222s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:37.482933457 +0000 UTC m=+864.097261407" lastFinishedPulling="2025-11-24 12:13:48.808907034 +0000 UTC m=+875.423234984" observedRunningTime="2025-11-24 12:13:50.600744392 +0000 UTC m=+877.215072362" watchObservedRunningTime="2025-11-24 12:13:50.602105222 +0000 UTC m=+877.216433172" Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.609613 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6" podStartSLOduration=3.502353142 podStartE2EDuration="13.609593297s" podCreationTimestamp="2025-11-24 12:13:37 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.867100596 +0000 UTC m=+865.481428546" lastFinishedPulling="2025-11-24 12:13:48.974340751 +0000 UTC m=+875.588668701" observedRunningTime="2025-11-24 12:13:50.566685661 +0000 UTC m=+877.181013611" watchObservedRunningTime="2025-11-24 12:13:50.609593297 +0000 UTC m=+877.223921257" Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.631714 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx" Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.658649 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx" podStartSLOduration=4.324083034 podStartE2EDuration="14.65860775s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.48764076 +0000 UTC m=+865.101968710" lastFinishedPulling="2025-11-24 12:13:48.822165476 +0000 UTC m=+875.436493426" observedRunningTime="2025-11-24 12:13:50.636158853 +0000 UTC m=+877.250486803" watchObservedRunningTime="2025-11-24 12:13:50.65860775 +0000 UTC m=+877.272935710" Nov 24 12:13:50 crc kubenswrapper[4930]: I1124 12:13:50.684911 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx" podStartSLOduration=4.369620996 podStartE2EDuration="14.684886187s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.501072617 +0000 UTC m=+865.115400567" lastFinishedPulling="2025-11-24 12:13:48.816337808 +0000 UTC m=+875.430665758" observedRunningTime="2025-11-24 12:13:50.659233138 +0000 UTC m=+877.273561108" watchObservedRunningTime="2025-11-24 12:13:50.684886187 +0000 UTC m=+877.299214137" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.629151 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5" event={"ID":"525584f5-a41b-4189-986d-32f6c4e6bc16","Type":"ContainerStarted","Data":"b64d740131aadbf2a657bc0488ec601fdc2ae01289fc441ee35e2864107ba837"} Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.629604 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 
12:13:51.632680 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm" event={"ID":"25cf6a11-4150-4091-a6b8-d7510c5ca5ac","Type":"ContainerStarted","Data":"70b0f8d1c3435ef08c872caaa2f99c4f5752a85453f38c6b6fd77a924511260f"} Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.633529 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.636246 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj" event={"ID":"606a5459-832e-4986-a171-4fd89e3ee1ec","Type":"ContainerStarted","Data":"bf47f7b967cf8e30794ccb6268cbb38dbadf9ddc56dc3d3b90fc97b0327225f5"} Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.636345 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.638304 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq" event={"ID":"5597a8b4-0ed5-4a54-ba70-6b2a7b9d2a8e","Type":"ContainerStarted","Data":"b804821c11183269ac782ca078b15232941f5f840ce2e73eb493b7771b23bd5a"} Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.638563 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.640124 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x" event={"ID":"0752fe04-d0ea-4225-8e86-62c70618a5a1","Type":"ContainerStarted","Data":"056557d603b4e5b5854455464eabf8daff89329535259aa3ccddb1baebd478c3"} Nov 24 12:13:51 crc 
kubenswrapper[4930]: I1124 12:13:51.640232 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.642168 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx" event={"ID":"39e1c56a-84c3-4f33-a16d-77c62d65cd0f","Type":"ContainerStarted","Data":"82caf13d484747825290ed7b0c5c59b0b7decab1dc4c51420a6da8690d9c635a"} Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.644610 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf" event={"ID":"9e55dcae-85ee-412f-aa9b-3fc5a061d595","Type":"ContainerStarted","Data":"d3178c12f9de6ca59f1807d5191b0b9b0f01443fdcf2c614b9540cb197640a0d"} Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.644816 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.648827 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb" event={"ID":"8bda08b7-e7e9-41cb-b01a-9d85a18a4ce8","Type":"ContainerStarted","Data":"f07d70bf43374f4553e7fd603a892128b0a05c4a92568165ee96dc4a1708ba7d"} Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.648893 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.651941 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m" 
event={"ID":"a60dc80f-2382-4901-a79e-1468759d9281","Type":"ContainerStarted","Data":"f6342982f5f03bc9cca96102958f2a5b37c136607d99e00328e96c86caf74519"} Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.652166 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.652335 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5" podStartSLOduration=4.9472067509999995 podStartE2EDuration="15.652311487s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.10352232 +0000 UTC m=+864.717850270" lastFinishedPulling="2025-11-24 12:13:48.808627046 +0000 UTC m=+875.422955006" observedRunningTime="2025-11-24 12:13:51.650715631 +0000 UTC m=+878.265043581" watchObservedRunningTime="2025-11-24 12:13:51.652311487 +0000 UTC m=+878.266639437" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.654370 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd" event={"ID":"2115a6ba-c1ea-45f6-a340-7ccd67a77bbd","Type":"ContainerStarted","Data":"807ff41ddcb49e9cfcfd2f2716feaaee13bf76ca7f97e2de0ad939c99dd77f12"} Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.654487 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.656047 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7" event={"ID":"dbb47a0b-1e01-47b7-b57f-20e2e908674e","Type":"ContainerStarted","Data":"1dd8f2d0fbebaa3723e133d4f691b474567c2d66b6f668e030ce22c5485e90fb"} Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.656180 4930 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.658251 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4" event={"ID":"96a9f3c5-4eaa-4265-9c3e-f0c54dd0df0b","Type":"ContainerStarted","Data":"cbbfb3f1b06a7e9da7453dd38ed3c581e1f163bbff675ed33da99306aa812b19"} Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.658288 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.670853 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq" podStartSLOduration=5.362788778 podStartE2EDuration="15.670836961s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.509248283 +0000 UTC m=+865.123576233" lastFinishedPulling="2025-11-24 12:13:48.817296466 +0000 UTC m=+875.431624416" observedRunningTime="2025-11-24 12:13:51.669216805 +0000 UTC m=+878.283544745" watchObservedRunningTime="2025-11-24 12:13:51.670836961 +0000 UTC m=+878.285164911" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.690003 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb" podStartSLOduration=5.363019425 podStartE2EDuration="15.689983153s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.488027731 +0000 UTC m=+865.102355681" lastFinishedPulling="2025-11-24 12:13:48.814991459 +0000 UTC m=+875.429319409" observedRunningTime="2025-11-24 12:13:51.68847717 +0000 UTC m=+878.302805120" watchObservedRunningTime="2025-11-24 12:13:51.689983153 +0000 
UTC m=+878.304311103" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.707436 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm" podStartSLOduration=5.33957566 podStartE2EDuration="15.707416275s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.509204382 +0000 UTC m=+865.123532332" lastFinishedPulling="2025-11-24 12:13:48.877044997 +0000 UTC m=+875.491372947" observedRunningTime="2025-11-24 12:13:51.705267913 +0000 UTC m=+878.319595863" watchObservedRunningTime="2025-11-24 12:13:51.707416275 +0000 UTC m=+878.321744225" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.730058 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x" podStartSLOduration=5.006650885 podStartE2EDuration="15.730035057s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.08547941 +0000 UTC m=+864.699807360" lastFinishedPulling="2025-11-24 12:13:48.808863582 +0000 UTC m=+875.423191532" observedRunningTime="2025-11-24 12:13:51.725912438 +0000 UTC m=+878.340240398" watchObservedRunningTime="2025-11-24 12:13:51.730035057 +0000 UTC m=+878.344363007" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.753869 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf" podStartSLOduration=5.466147767 podStartE2EDuration="15.753847133s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.521163486 +0000 UTC m=+865.135491436" lastFinishedPulling="2025-11-24 12:13:48.808862852 +0000 UTC m=+875.423190802" observedRunningTime="2025-11-24 12:13:51.748298574 +0000 UTC m=+878.362626524" watchObservedRunningTime="2025-11-24 12:13:51.753847133 +0000 UTC m=+878.368175083" Nov 24 
12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.769803 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj" podStartSLOduration=5.459009042 podStartE2EDuration="15.769784983s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.505214887 +0000 UTC m=+865.119542837" lastFinishedPulling="2025-11-24 12:13:48.815990828 +0000 UTC m=+875.430318778" observedRunningTime="2025-11-24 12:13:51.767237119 +0000 UTC m=+878.381565069" watchObservedRunningTime="2025-11-24 12:13:51.769784983 +0000 UTC m=+878.384112933" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.815184 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd" podStartSLOduration=4.7094107990000005 podStartE2EDuration="15.815165021s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:37.703025478 +0000 UTC m=+864.317353428" lastFinishedPulling="2025-11-24 12:13:48.8087797 +0000 UTC m=+875.423107650" observedRunningTime="2025-11-24 12:13:51.798427558 +0000 UTC m=+878.412755508" watchObservedRunningTime="2025-11-24 12:13:51.815165021 +0000 UTC m=+878.429492971" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.815808 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4" podStartSLOduration=5.083683545 podStartE2EDuration="15.815801089s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.085195452 +0000 UTC m=+864.699523402" lastFinishedPulling="2025-11-24 12:13:48.817313006 +0000 UTC m=+875.431640946" observedRunningTime="2025-11-24 12:13:51.814188822 +0000 UTC m=+878.428516772" watchObservedRunningTime="2025-11-24 12:13:51.815801089 +0000 UTC m=+878.430129039" Nov 24 12:13:51 crc 
kubenswrapper[4930]: I1124 12:13:51.837037 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m" podStartSLOduration=5.522446148 podStartE2EDuration="15.837017689s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.505169745 +0000 UTC m=+865.119497695" lastFinishedPulling="2025-11-24 12:13:48.819741286 +0000 UTC m=+875.434069236" observedRunningTime="2025-11-24 12:13:51.833882869 +0000 UTC m=+878.448210829" watchObservedRunningTime="2025-11-24 12:13:51.837017689 +0000 UTC m=+878.451345639" Nov 24 12:13:51 crc kubenswrapper[4930]: I1124 12:13:51.858977 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7" podStartSLOduration=5.981859849 podStartE2EDuration="15.858955312s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.931696017 +0000 UTC m=+865.546023967" lastFinishedPulling="2025-11-24 12:13:48.80879148 +0000 UTC m=+875.423119430" observedRunningTime="2025-11-24 12:13:51.852589888 +0000 UTC m=+878.466917838" watchObservedRunningTime="2025-11-24 12:13:51.858955312 +0000 UTC m=+878.473283262" Nov 24 12:13:56 crc kubenswrapper[4930]: I1124 12:13:56.541938 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-56s9w" Nov 24 12:13:56 crc kubenswrapper[4930]: I1124 12:13:56.616343 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-wqr7x" Nov 24 12:13:56 crc kubenswrapper[4930]: I1124 12:13:56.618182 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2jhpd" Nov 24 12:13:56 crc kubenswrapper[4930]: I1124 
12:13:56.618249 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-wn7d4" Nov 24 12:13:56 crc kubenswrapper[4930]: I1124 12:13:56.627240 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-22kp5" Nov 24 12:13:56 crc kubenswrapper[4930]: I1124 12:13:56.692407 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-ngxgx" Nov 24 12:13:56 crc kubenswrapper[4930]: I1124 12:13:56.699259 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-4svhq" Nov 24 12:13:56 crc kubenswrapper[4930]: I1124 12:13:56.793848 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4ffrf" Nov 24 12:13:56 crc kubenswrapper[4930]: I1124 12:13:56.804886 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-kdw5m" Nov 24 12:13:56 crc kubenswrapper[4930]: I1124 12:13:56.919174 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-2m7pb" Nov 24 12:13:56 crc kubenswrapper[4930]: I1124 12:13:56.971338 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-z7ftj" Nov 24 12:13:56 crc kubenswrapper[4930]: I1124 12:13:56.978313 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-8kwlf" Nov 24 12:13:57 crc kubenswrapper[4930]: I1124 12:13:57.043641 4930 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-g62mm" Nov 24 12:13:57 crc kubenswrapper[4930]: I1124 12:13:57.119599 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g2cfx" Nov 24 12:13:57 crc kubenswrapper[4930]: I1124 12:13:57.200180 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-qvfs7" Nov 24 12:13:59 crc kubenswrapper[4930]: I1124 12:13:59.865425 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc" event={"ID":"6de96fac-ce97-4bec-a2af-f50f839454ea","Type":"ContainerStarted","Data":"32fed1fc02c4f92e38b74372d86f5d2bc4cd520a2b0bd4489ae9729d8ad2a168"} Nov 24 12:13:59 crc kubenswrapper[4930]: I1124 12:13:59.867996 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" event={"ID":"f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9","Type":"ContainerStarted","Data":"e204ed7a079847ac4529c2134770fd8f875a666cc47d89d34fd12b715cd9a8a9"} Nov 24 12:14:00 crc kubenswrapper[4930]: I1124 12:14:00.925175 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc" Nov 24 12:14:00 crc kubenswrapper[4930]: I1124 12:14:00.959818 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" podStartSLOduration=5.089679527 podStartE2EDuration="24.959785374s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.875498378 +0000 UTC m=+865.489826318" lastFinishedPulling="2025-11-24 12:13:58.745604215 +0000 UTC m=+885.359932165" observedRunningTime="2025-11-24 
12:14:00.959470795 +0000 UTC m=+887.573798795" watchObservedRunningTime="2025-11-24 12:14:00.959785374 +0000 UTC m=+887.574113324" Nov 24 12:14:01 crc kubenswrapper[4930]: I1124 12:14:01.000612 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc" podStartSLOduration=4.816958299 podStartE2EDuration="25.000575s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.869352171 +0000 UTC m=+865.483680121" lastFinishedPulling="2025-11-24 12:13:59.052968872 +0000 UTC m=+885.667296822" observedRunningTime="2025-11-24 12:14:00.992331992 +0000 UTC m=+887.606659942" watchObservedRunningTime="2025-11-24 12:14:01.000575 +0000 UTC m=+887.614902960" Nov 24 12:14:01 crc kubenswrapper[4930]: I1124 12:14:01.938732 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" event={"ID":"f7031ec9-a046-4f1f-93e0-a6da41013d68","Type":"ContainerStarted","Data":"570c58fd8e802233ed04745942be99f4acbfad3e972141f94d99acb99157d051"} Nov 24 12:14:01 crc kubenswrapper[4930]: I1124 12:14:01.939587 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" Nov 24 12:14:01 crc kubenswrapper[4930]: I1124 12:14:01.964564 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" podStartSLOduration=3.8289114939999997 podStartE2EDuration="25.96452459s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.880609625 +0000 UTC m=+865.494937575" lastFinishedPulling="2025-11-24 12:14:01.016222721 +0000 UTC m=+887.630550671" observedRunningTime="2025-11-24 12:14:01.960644168 +0000 UTC m=+888.574972138" watchObservedRunningTime="2025-11-24 12:14:01.96452459 +0000 UTC 
m=+888.578852540" Nov 24 12:14:04 crc kubenswrapper[4930]: I1124 12:14:04.982164 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt" event={"ID":"37344a1b-ea4d-4dcf-a803-3811a5626106","Type":"ContainerStarted","Data":"d50e94b13fd303099781c3071e318c968fc81acbd113f1a8d7ca906b4863d9b0"} Nov 24 12:14:04 crc kubenswrapper[4930]: I1124 12:14:04.983626 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt" Nov 24 12:14:04 crc kubenswrapper[4930]: I1124 12:14:04.986988 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" event={"ID":"21e42885-6ebc-4b29-a2d1-32f64e257e11","Type":"ContainerStarted","Data":"9c2484955f4e0390354a981742cf587142d4baba3a436b5ee59bfc0434acd417"} Nov 24 12:14:04 crc kubenswrapper[4930]: I1124 12:14:04.987457 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" Nov 24 12:14:04 crc kubenswrapper[4930]: I1124 12:14:04.990787 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" event={"ID":"6db937f0-a6f1-44e0-87b8-cd4e2d645e24","Type":"ContainerStarted","Data":"6823ccc234e5adab5cbfa923c4271b962acf4e085f721e4dca3817489f2e7f4e"} Nov 24 12:14:04 crc kubenswrapper[4930]: I1124 12:14:04.991186 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" Nov 24 12:14:05 crc kubenswrapper[4930]: I1124 12:14:05.010893 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt" podStartSLOduration=3.547196265 podStartE2EDuration="29.010866381s" 
podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.52824652 +0000 UTC m=+865.142574470" lastFinishedPulling="2025-11-24 12:14:03.991916636 +0000 UTC m=+890.606244586" observedRunningTime="2025-11-24 12:14:05.004215029 +0000 UTC m=+891.618542999" watchObservedRunningTime="2025-11-24 12:14:05.010866381 +0000 UTC m=+891.625194331" Nov 24 12:14:05 crc kubenswrapper[4930]: I1124 12:14:05.026002 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" podStartSLOduration=3.892093444 podStartE2EDuration="29.025974126s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.88426793 +0000 UTC m=+865.498595870" lastFinishedPulling="2025-11-24 12:14:04.018148602 +0000 UTC m=+890.632476552" observedRunningTime="2025-11-24 12:14:05.022887457 +0000 UTC m=+891.637215397" watchObservedRunningTime="2025-11-24 12:14:05.025974126 +0000 UTC m=+891.640302076" Nov 24 12:14:05 crc kubenswrapper[4930]: I1124 12:14:05.043195 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" podStartSLOduration=3.854394448 podStartE2EDuration="29.04311914s" podCreationTimestamp="2025-11-24 12:13:36 +0000 UTC" firstStartedPulling="2025-11-24 12:13:38.867336152 +0000 UTC m=+865.481664092" lastFinishedPulling="2025-11-24 12:14:04.056060834 +0000 UTC m=+890.670388784" observedRunningTime="2025-11-24 12:14:05.042667087 +0000 UTC m=+891.656995037" watchObservedRunningTime="2025-11-24 12:14:05.04311914 +0000 UTC m=+891.657447080" Nov 24 12:14:07 crc kubenswrapper[4930]: I1124 12:14:07.265400 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-kzwpc" Nov 24 12:14:07 crc kubenswrapper[4930]: I1124 12:14:07.337770 4930 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-27cpb" Nov 24 12:14:07 crc kubenswrapper[4930]: I1124 12:14:07.688366 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" Nov 24 12:14:07 crc kubenswrapper[4930]: I1124 12:14:07.693759 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq" Nov 24 12:14:17 crc kubenswrapper[4930]: I1124 12:14:17.045925 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-d9wbt" Nov 24 12:14:17 crc kubenswrapper[4930]: I1124 12:14:17.414008 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8464cf66df-f2q9m" Nov 24 12:14:17 crc kubenswrapper[4930]: I1124 12:14:17.461108 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-2zd5j" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.273935 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-4wgvw"] Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.275472 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-4wgvw" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.282051 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5qqvz" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.283975 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.284306 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.289664 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.339962 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-4wgvw"] Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.363742 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d924ea1-bfb3-448d-8ee6-44a3d70d45f8-config\") pod \"dnsmasq-dns-7bdd77c89-4wgvw\" (UID: \"1d924ea1-bfb3-448d-8ee6-44a3d70d45f8\") " pod="openstack/dnsmasq-dns-7bdd77c89-4wgvw" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.363823 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48vz4\" (UniqueName: \"kubernetes.io/projected/1d924ea1-bfb3-448d-8ee6-44a3d70d45f8-kube-api-access-48vz4\") pod \"dnsmasq-dns-7bdd77c89-4wgvw\" (UID: \"1d924ea1-bfb3-448d-8ee6-44a3d70d45f8\") " pod="openstack/dnsmasq-dns-7bdd77c89-4wgvw" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.368657 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6584b49599-lc6s4"] Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.370516 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-lc6s4" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.375971 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.380870 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-lc6s4"] Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.465890 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d924ea1-bfb3-448d-8ee6-44a3d70d45f8-config\") pod \"dnsmasq-dns-7bdd77c89-4wgvw\" (UID: \"1d924ea1-bfb3-448d-8ee6-44a3d70d45f8\") " pod="openstack/dnsmasq-dns-7bdd77c89-4wgvw" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.465994 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48vz4\" (UniqueName: \"kubernetes.io/projected/1d924ea1-bfb3-448d-8ee6-44a3d70d45f8-kube-api-access-48vz4\") pod \"dnsmasq-dns-7bdd77c89-4wgvw\" (UID: \"1d924ea1-bfb3-448d-8ee6-44a3d70d45f8\") " pod="openstack/dnsmasq-dns-7bdd77c89-4wgvw" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.466052 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5920228b-413d-4ce2-8dcb-df479ff3d797-config\") pod \"dnsmasq-dns-6584b49599-lc6s4\" (UID: \"5920228b-413d-4ce2-8dcb-df479ff3d797\") " pod="openstack/dnsmasq-dns-6584b49599-lc6s4" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.466081 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8v89\" (UniqueName: \"kubernetes.io/projected/5920228b-413d-4ce2-8dcb-df479ff3d797-kube-api-access-m8v89\") pod \"dnsmasq-dns-6584b49599-lc6s4\" (UID: \"5920228b-413d-4ce2-8dcb-df479ff3d797\") " pod="openstack/dnsmasq-dns-6584b49599-lc6s4" Nov 24 12:14:35 crc 
kubenswrapper[4930]: I1124 12:14:35.466404 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5920228b-413d-4ce2-8dcb-df479ff3d797-dns-svc\") pod \"dnsmasq-dns-6584b49599-lc6s4\" (UID: \"5920228b-413d-4ce2-8dcb-df479ff3d797\") " pod="openstack/dnsmasq-dns-6584b49599-lc6s4" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.467244 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d924ea1-bfb3-448d-8ee6-44a3d70d45f8-config\") pod \"dnsmasq-dns-7bdd77c89-4wgvw\" (UID: \"1d924ea1-bfb3-448d-8ee6-44a3d70d45f8\") " pod="openstack/dnsmasq-dns-7bdd77c89-4wgvw" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.489838 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48vz4\" (UniqueName: \"kubernetes.io/projected/1d924ea1-bfb3-448d-8ee6-44a3d70d45f8-kube-api-access-48vz4\") pod \"dnsmasq-dns-7bdd77c89-4wgvw\" (UID: \"1d924ea1-bfb3-448d-8ee6-44a3d70d45f8\") " pod="openstack/dnsmasq-dns-7bdd77c89-4wgvw" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.567965 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5920228b-413d-4ce2-8dcb-df479ff3d797-dns-svc\") pod \"dnsmasq-dns-6584b49599-lc6s4\" (UID: \"5920228b-413d-4ce2-8dcb-df479ff3d797\") " pod="openstack/dnsmasq-dns-6584b49599-lc6s4" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.568066 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5920228b-413d-4ce2-8dcb-df479ff3d797-config\") pod \"dnsmasq-dns-6584b49599-lc6s4\" (UID: \"5920228b-413d-4ce2-8dcb-df479ff3d797\") " pod="openstack/dnsmasq-dns-6584b49599-lc6s4" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.568088 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-m8v89\" (UniqueName: \"kubernetes.io/projected/5920228b-413d-4ce2-8dcb-df479ff3d797-kube-api-access-m8v89\") pod \"dnsmasq-dns-6584b49599-lc6s4\" (UID: \"5920228b-413d-4ce2-8dcb-df479ff3d797\") " pod="openstack/dnsmasq-dns-6584b49599-lc6s4" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.569077 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5920228b-413d-4ce2-8dcb-df479ff3d797-dns-svc\") pod \"dnsmasq-dns-6584b49599-lc6s4\" (UID: \"5920228b-413d-4ce2-8dcb-df479ff3d797\") " pod="openstack/dnsmasq-dns-6584b49599-lc6s4" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.569462 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5920228b-413d-4ce2-8dcb-df479ff3d797-config\") pod \"dnsmasq-dns-6584b49599-lc6s4\" (UID: \"5920228b-413d-4ce2-8dcb-df479ff3d797\") " pod="openstack/dnsmasq-dns-6584b49599-lc6s4" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.584830 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8v89\" (UniqueName: \"kubernetes.io/projected/5920228b-413d-4ce2-8dcb-df479ff3d797-kube-api-access-m8v89\") pod \"dnsmasq-dns-6584b49599-lc6s4\" (UID: \"5920228b-413d-4ce2-8dcb-df479ff3d797\") " pod="openstack/dnsmasq-dns-6584b49599-lc6s4" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.641681 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-4wgvw" Nov 24 12:14:35 crc kubenswrapper[4930]: I1124 12:14:35.692733 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-lc6s4" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.135379 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-lc6s4"] Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.170736 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-42wpq"] Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.172047 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.182759 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-42wpq"] Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.280602 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf2e767-1d50-416b-aa31-16a1a6ee631c-config\") pod \"dnsmasq-dns-6d8746976c-42wpq\" (UID: \"acf2e767-1d50-416b-aa31-16a1a6ee631c\") " pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.280689 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jldmt\" (UniqueName: \"kubernetes.io/projected/acf2e767-1d50-416b-aa31-16a1a6ee631c-kube-api-access-jldmt\") pod \"dnsmasq-dns-6d8746976c-42wpq\" (UID: \"acf2e767-1d50-416b-aa31-16a1a6ee631c\") " pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.280715 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acf2e767-1d50-416b-aa31-16a1a6ee631c-dns-svc\") pod \"dnsmasq-dns-6d8746976c-42wpq\" (UID: \"acf2e767-1d50-416b-aa31-16a1a6ee631c\") " pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 
12:14:36.381954 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf2e767-1d50-416b-aa31-16a1a6ee631c-config\") pod \"dnsmasq-dns-6d8746976c-42wpq\" (UID: \"acf2e767-1d50-416b-aa31-16a1a6ee631c\") " pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.382046 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jldmt\" (UniqueName: \"kubernetes.io/projected/acf2e767-1d50-416b-aa31-16a1a6ee631c-kube-api-access-jldmt\") pod \"dnsmasq-dns-6d8746976c-42wpq\" (UID: \"acf2e767-1d50-416b-aa31-16a1a6ee631c\") " pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.382072 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acf2e767-1d50-416b-aa31-16a1a6ee631c-dns-svc\") pod \"dnsmasq-dns-6d8746976c-42wpq\" (UID: \"acf2e767-1d50-416b-aa31-16a1a6ee631c\") " pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.383029 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acf2e767-1d50-416b-aa31-16a1a6ee631c-dns-svc\") pod \"dnsmasq-dns-6d8746976c-42wpq\" (UID: \"acf2e767-1d50-416b-aa31-16a1a6ee631c\") " pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.383675 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf2e767-1d50-416b-aa31-16a1a6ee631c-config\") pod \"dnsmasq-dns-6d8746976c-42wpq\" (UID: \"acf2e767-1d50-416b-aa31-16a1a6ee631c\") " pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.405687 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jldmt\" 
(UniqueName: \"kubernetes.io/projected/acf2e767-1d50-416b-aa31-16a1a6ee631c-kube-api-access-jldmt\") pod \"dnsmasq-dns-6d8746976c-42wpq\" (UID: \"acf2e767-1d50-416b-aa31-16a1a6ee631c\") " pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.461697 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-4wgvw"] Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.475129 4930 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.504403 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.615925 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-lc6s4"] Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.857808 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-4wgvw"] Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.903426 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-9k44v"] Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.904854 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-9k44v" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.907959 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-42wpq"] Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.924579 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-9k44v"] Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.991491 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-dns-svc\") pod \"dnsmasq-dns-6486446b9f-9k44v\" (UID: \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\") " pod="openstack/dnsmasq-dns-6486446b9f-9k44v" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.991588 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx662\" (UniqueName: \"kubernetes.io/projected/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-kube-api-access-vx662\") pod \"dnsmasq-dns-6486446b9f-9k44v\" (UID: \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\") " pod="openstack/dnsmasq-dns-6486446b9f-9k44v" Nov 24 12:14:36 crc kubenswrapper[4930]: I1124 12:14:36.991773 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-config\") pod \"dnsmasq-dns-6486446b9f-9k44v\" (UID: \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\") " pod="openstack/dnsmasq-dns-6486446b9f-9k44v" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.093660 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-config\") pod \"dnsmasq-dns-6486446b9f-9k44v\" (UID: \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\") " pod="openstack/dnsmasq-dns-6486446b9f-9k44v" Nov 24 12:14:37 crc 
kubenswrapper[4930]: I1124 12:14:37.093786 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-dns-svc\") pod \"dnsmasq-dns-6486446b9f-9k44v\" (UID: \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\") " pod="openstack/dnsmasq-dns-6486446b9f-9k44v" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.093858 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx662\" (UniqueName: \"kubernetes.io/projected/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-kube-api-access-vx662\") pod \"dnsmasq-dns-6486446b9f-9k44v\" (UID: \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\") " pod="openstack/dnsmasq-dns-6486446b9f-9k44v" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.094731 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-config\") pod \"dnsmasq-dns-6486446b9f-9k44v\" (UID: \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\") " pod="openstack/dnsmasq-dns-6486446b9f-9k44v" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.095069 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-dns-svc\") pod \"dnsmasq-dns-6486446b9f-9k44v\" (UID: \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\") " pod="openstack/dnsmasq-dns-6486446b9f-9k44v" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.117315 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx662\" (UniqueName: \"kubernetes.io/projected/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-kube-api-access-vx662\") pod \"dnsmasq-dns-6486446b9f-9k44v\" (UID: \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\") " pod="openstack/dnsmasq-dns-6486446b9f-9k44v" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.253156 4930 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-6d8746976c-42wpq" event={"ID":"acf2e767-1d50-416b-aa31-16a1a6ee631c","Type":"ContainerStarted","Data":"f7e6c46c74cacc0c2320154631f2ad4b4e373478a1398119eeb737d2f61cbbef"} Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.256892 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-4wgvw" event={"ID":"1d924ea1-bfb3-448d-8ee6-44a3d70d45f8","Type":"ContainerStarted","Data":"5bc35a9cb5fe0d67dc792c7ceebc3c93aada1176ef3004c8fa4c48c4b31499e3"} Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.260785 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-lc6s4" event={"ID":"5920228b-413d-4ce2-8dcb-df479ff3d797","Type":"ContainerStarted","Data":"ebbc959e3a343b07d66a2b27bb95132ac67ec82fa2d1278a6f262ac141fcbcd1"} Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.283959 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-9k44v" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.338084 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.339330 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.341257 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.341707 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.341825 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.341991 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.342287 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.343622 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bbrsh" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.343820 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.369206 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.397366 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.397435 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.397465 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.397513 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d35e6340-889e-4150-90c7-059417befffd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.397583 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.397623 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.397656 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.397698 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d35e6340-889e-4150-90c7-059417befffd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.397727 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.397752 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.397779 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkc4h\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-kube-api-access-bkc4h\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.510835 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.512103 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.515559 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.515617 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.515705 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d35e6340-889e-4150-90c7-059417befffd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.515752 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.515796 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.515826 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.515888 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d35e6340-889e-4150-90c7-059417befffd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.515909 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.515938 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: 
I1124 12:14:37.515949 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.515961 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkc4h\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-kube-api-access-bkc4h\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.517772 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.518643 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.521356 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.523046 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.523315 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.524012 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.542014 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d35e6340-889e-4150-90c7-059417befffd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.542134 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d35e6340-889e-4150-90c7-059417befffd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.546927 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkc4h\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-kube-api-access-bkc4h\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.550670 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.679886 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:14:37 crc kubenswrapper[4930]: I1124 12:14:37.847472 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-9k44v"] Nov 24 12:14:37 crc kubenswrapper[4930]: W1124 12:14:37.869962 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbd5e1f6_e854_4370_8ae6_d23fb6fc083d.slice/crio-b489f1362591ef3303d2b1477bc4e58b9e834cdc7f588ac801e13edc085d4d38 WatchSource:0}: Error finding container b489f1362591ef3303d2b1477bc4e58b9e834cdc7f588ac801e13edc085d4d38: Status 404 returned error can't find the container with id b489f1362591ef3303d2b1477bc4e58b9e834cdc7f588ac801e13edc085d4d38 Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.049620 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.058858 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.060952 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.061351 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xm22l" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.063250 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.063427 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.064181 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.065784 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.066990 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.067406 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.141447 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.144505 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.144578 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/270a64e1-2837-47ac-860f-d616efdc6bbc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.144945 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.145039 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/270a64e1-2837-47ac-860f-d616efdc6bbc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.145167 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.145317 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zsxm\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-kube-api-access-9zsxm\") pod 
\"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.147004 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-config-data\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.147105 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.147212 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.147301 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.249588 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " 
pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.249677 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.249744 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.249832 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.249862 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/270a64e1-2837-47ac-860f-d616efdc6bbc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.249897 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.249915 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/270a64e1-2837-47ac-860f-d616efdc6bbc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.249942 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.249957 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zsxm\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-kube-api-access-9zsxm\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.249984 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-config-data\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.250001 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.250275 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.251393 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.253732 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-config-data\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.253877 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.254168 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.254867 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.257586 4930 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/270a64e1-2837-47ac-860f-d616efdc6bbc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.258215 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.260007 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.271393 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/270a64e1-2837-47ac-860f-d616efdc6bbc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.274770 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zsxm\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-kube-api-access-9zsxm\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.292579 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-9k44v" 
event={"ID":"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d","Type":"ContainerStarted","Data":"b489f1362591ef3303d2b1477bc4e58b9e834cdc7f588ac801e13edc085d4d38"} Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.314516 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.315819 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " pod="openstack/rabbitmq-server-0" Nov 24 12:14:38 crc kubenswrapper[4930]: W1124 12:14:38.330427 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd35e6340_889e_4150_90c7_059417befffd.slice/crio-ac084088b65d3dff4cdccbae0f3337d5962164e31efd9fe0b91c54047cc39773 WatchSource:0}: Error finding container ac084088b65d3dff4cdccbae0f3337d5962164e31efd9fe0b91c54047cc39773: Status 404 returned error can't find the container with id ac084088b65d3dff4cdccbae0f3337d5962164e31efd9fe0b91c54047cc39773 Nov 24 12:14:38 crc kubenswrapper[4930]: I1124 12:14:38.395499 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.051787 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.309860 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"270a64e1-2837-47ac-860f-d616efdc6bbc","Type":"ContainerStarted","Data":"fcedf2cd3937e11e0a8aee329b99961336141df5df74c53e05396f9a0a658b44"} Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.314606 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d35e6340-889e-4150-90c7-059417befffd","Type":"ContainerStarted","Data":"ac084088b65d3dff4cdccbae0f3337d5962164e31efd9fe0b91c54047cc39773"} Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.611886 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.621748 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.623900 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.625052 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6dtsk" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.625146 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.625399 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.631935 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.651127 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.698150 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzq2v\" (UniqueName: \"kubernetes.io/projected/bddca103-daee-4f61-9165-1f6ec4762bd1-kube-api-access-mzq2v\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.698213 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bddca103-daee-4f61-9165-1f6ec4762bd1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.698255 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bddca103-daee-4f61-9165-1f6ec4762bd1-kolla-config\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.698274 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddca103-daee-4f61-9165-1f6ec4762bd1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.698475 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.698512 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bddca103-daee-4f61-9165-1f6ec4762bd1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.698558 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bddca103-daee-4f61-9165-1f6ec4762bd1-config-data-default\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.698586 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bddca103-daee-4f61-9165-1f6ec4762bd1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.800752 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzq2v\" (UniqueName: \"kubernetes.io/projected/bddca103-daee-4f61-9165-1f6ec4762bd1-kube-api-access-mzq2v\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.800876 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bddca103-daee-4f61-9165-1f6ec4762bd1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.800950 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bddca103-daee-4f61-9165-1f6ec4762bd1-kolla-config\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.800976 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddca103-daee-4f61-9165-1f6ec4762bd1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.802947 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " 
pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.802997 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bddca103-daee-4f61-9165-1f6ec4762bd1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.803090 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bddca103-daee-4f61-9165-1f6ec4762bd1-config-data-default\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.803123 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bddca103-daee-4f61-9165-1f6ec4762bd1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.804831 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.806193 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bddca103-daee-4f61-9165-1f6ec4762bd1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.806800 4930 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bddca103-daee-4f61-9165-1f6ec4762bd1-kolla-config\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.807229 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bddca103-daee-4f61-9165-1f6ec4762bd1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.813451 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bddca103-daee-4f61-9165-1f6ec4762bd1-config-data-default\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.826187 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bddca103-daee-4f61-9165-1f6ec4762bd1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.830862 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddca103-daee-4f61-9165-1f6ec4762bd1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.830887 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.842895 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzq2v\" (UniqueName: \"kubernetes.io/projected/bddca103-daee-4f61-9165-1f6ec4762bd1-kube-api-access-mzq2v\") pod \"openstack-galera-0\" (UID: \"bddca103-daee-4f61-9165-1f6ec4762bd1\") " pod="openstack/openstack-galera-0" Nov 24 12:14:39 crc kubenswrapper[4930]: I1124 12:14:39.945883 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 12:14:40 crc kubenswrapper[4930]: I1124 12:14:40.629522 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 12:14:40 crc kubenswrapper[4930]: W1124 12:14:40.647746 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbddca103_daee_4f61_9165_1f6ec4762bd1.slice/crio-07b3753bf7dacba4fe76aba85f5fd6c1c82037442abfd851b1a8b62ade5acff9 WatchSource:0}: Error finding container 07b3753bf7dacba4fe76aba85f5fd6c1c82037442abfd851b1a8b62ade5acff9: Status 404 returned error can't find the container with id 07b3753bf7dacba4fe76aba85f5fd6c1c82037442abfd851b1a8b62ade5acff9 Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.191032 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.199165 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.204620 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.211841 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.212121 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-m26dm" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.212491 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.206343 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.290642 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.291047 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64612891-0a55-4622-8888-d141a949c665-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.291090 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/64612891-0a55-4622-8888-d141a949c665-config-data-default\") 
pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.291128 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/64612891-0a55-4622-8888-d141a949c665-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.291195 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxcr8\" (UniqueName: \"kubernetes.io/projected/64612891-0a55-4622-8888-d141a949c665-kube-api-access-qxcr8\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.291229 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/64612891-0a55-4622-8888-d141a949c665-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.291282 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64612891-0a55-4622-8888-d141a949c665-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.291323 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/64612891-0a55-4622-8888-d141a949c665-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.382243 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bddca103-daee-4f61-9165-1f6ec4762bd1","Type":"ContainerStarted","Data":"07b3753bf7dacba4fe76aba85f5fd6c1c82037442abfd851b1a8b62ade5acff9"} Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.399372 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.400288 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxcr8\" (UniqueName: \"kubernetes.io/projected/64612891-0a55-4622-8888-d141a949c665-kube-api-access-qxcr8\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.400336 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/64612891-0a55-4622-8888-d141a949c665-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.400419 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64612891-0a55-4622-8888-d141a949c665-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.401271 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/64612891-0a55-4622-8888-d141a949c665-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.401367 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/64612891-0a55-4622-8888-d141a949c665-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.401645 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/64612891-0a55-4622-8888-d141a949c665-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.401733 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.401806 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64612891-0a55-4622-8888-d141a949c665-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.401832 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/64612891-0a55-4622-8888-d141a949c665-config-data-default\") pod 
\"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.402281 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.409751 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64612891-0a55-4622-8888-d141a949c665-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.411920 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/64612891-0a55-4622-8888-d141a949c665-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.412057 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/64612891-0a55-4622-8888-d141a949c665-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.418085 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.418194 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.441680 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/64612891-0a55-4622-8888-d141a949c665-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.441849 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.442011 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.442068 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-txhk4" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.452803 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxcr8\" (UniqueName: \"kubernetes.io/projected/64612891-0a55-4622-8888-d141a949c665-kube-api-access-qxcr8\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.458212 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64612891-0a55-4622-8888-d141a949c665-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.471750 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"64612891-0a55-4622-8888-d141a949c665\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.513968 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.514078 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-kolla-config\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.514581 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqjlt\" (UniqueName: \"kubernetes.io/projected/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-kube-api-access-fqjlt\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.514766 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.514916 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-config-data\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc 
kubenswrapper[4930]: I1124 12:14:41.542173 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.618608 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.618807 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-config-data\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.618887 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.618937 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-kolla-config\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.619086 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqjlt\" (UniqueName: \"kubernetes.io/projected/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-kube-api-access-fqjlt\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 
12:14:41.619994 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-config-data\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.620660 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-kolla-config\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.636908 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.638361 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.651961 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqjlt\" (UniqueName: \"kubernetes.io/projected/ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa-kube-api-access-fqjlt\") pod \"memcached-0\" (UID: \"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa\") " pod="openstack/memcached-0" Nov 24 12:14:41 crc kubenswrapper[4930]: I1124 12:14:41.905064 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 12:14:42 crc kubenswrapper[4930]: I1124 12:14:42.193214 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 12:14:42 crc kubenswrapper[4930]: I1124 12:14:42.392902 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"64612891-0a55-4622-8888-d141a949c665","Type":"ContainerStarted","Data":"3e18fa4299437275ef03702d3802c6dcfddd26c5c4ff347ee05efe0019da00cc"} Nov 24 12:14:42 crc kubenswrapper[4930]: I1124 12:14:42.571266 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 12:14:42 crc kubenswrapper[4930]: W1124 12:14:42.617896 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca5fe78e_8ed4_4f0f_ae80_7760b1bb5afa.slice/crio-e4466797977d8a0d4d05a60d7e541c1b9d2666e9124c0c42ce348d566f7ff18e WatchSource:0}: Error finding container e4466797977d8a0d4d05a60d7e541c1b9d2666e9124c0c42ce348d566f7ff18e: Status 404 returned error can't find the container with id e4466797977d8a0d4d05a60d7e541c1b9d2666e9124c0c42ce348d566f7ff18e Nov 24 12:14:43 crc kubenswrapper[4930]: I1124 12:14:43.418797 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa","Type":"ContainerStarted","Data":"e4466797977d8a0d4d05a60d7e541c1b9d2666e9124c0c42ce348d566f7ff18e"} Nov 24 12:14:43 crc kubenswrapper[4930]: I1124 12:14:43.491960 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:14:43 crc kubenswrapper[4930]: I1124 12:14:43.493162 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:14:43 crc kubenswrapper[4930]: I1124 12:14:43.499558 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bdd4z" Nov 24 12:14:43 crc kubenswrapper[4930]: I1124 12:14:43.500366 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:14:43 crc kubenswrapper[4930]: I1124 12:14:43.583522 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmvfb\" (UniqueName: \"kubernetes.io/projected/7cce9366-d1b8-46ab-8ceb-05f6b71348f1-kube-api-access-jmvfb\") pod \"kube-state-metrics-0\" (UID: \"7cce9366-d1b8-46ab-8ceb-05f6b71348f1\") " pod="openstack/kube-state-metrics-0" Nov 24 12:14:43 crc kubenswrapper[4930]: I1124 12:14:43.684692 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmvfb\" (UniqueName: \"kubernetes.io/projected/7cce9366-d1b8-46ab-8ceb-05f6b71348f1-kube-api-access-jmvfb\") pod \"kube-state-metrics-0\" (UID: \"7cce9366-d1b8-46ab-8ceb-05f6b71348f1\") " pod="openstack/kube-state-metrics-0" Nov 24 12:14:43 crc kubenswrapper[4930]: I1124 12:14:43.725321 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmvfb\" (UniqueName: \"kubernetes.io/projected/7cce9366-d1b8-46ab-8ceb-05f6b71348f1-kube-api-access-jmvfb\") pod \"kube-state-metrics-0\" (UID: \"7cce9366-d1b8-46ab-8ceb-05f6b71348f1\") " pod="openstack/kube-state-metrics-0" Nov 24 12:14:43 crc kubenswrapper[4930]: I1124 12:14:43.820767 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.498997 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.501287 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.508301 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.510385 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.510427 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.510449 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-2ztbj" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.510626 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.512370 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.656454 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3118a4f6-bfb6-4646-a543-2f2dcbf03681-config\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.656534 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/3118a4f6-bfb6-4646-a543-2f2dcbf03681-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.656628 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.656779 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3118a4f6-bfb6-4646-a543-2f2dcbf03681-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.656903 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3118a4f6-bfb6-4646-a543-2f2dcbf03681-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.656971 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhvrm\" (UniqueName: \"kubernetes.io/projected/3118a4f6-bfb6-4646-a543-2f2dcbf03681-kube-api-access-fhvrm\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.657025 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3118a4f6-bfb6-4646-a543-2f2dcbf03681-combined-ca-bundle\") pod 
\"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.657202 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3118a4f6-bfb6-4646-a543-2f2dcbf03681-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.758639 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3118a4f6-bfb6-4646-a543-2f2dcbf03681-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.758719 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3118a4f6-bfb6-4646-a543-2f2dcbf03681-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.758758 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3118a4f6-bfb6-4646-a543-2f2dcbf03681-config\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.758778 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3118a4f6-bfb6-4646-a543-2f2dcbf03681-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc 
kubenswrapper[4930]: I1124 12:14:47.758804 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.758835 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3118a4f6-bfb6-4646-a543-2f2dcbf03681-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.758863 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3118a4f6-bfb6-4646-a543-2f2dcbf03681-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.758889 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhvrm\" (UniqueName: \"kubernetes.io/projected/3118a4f6-bfb6-4646-a543-2f2dcbf03681-kube-api-access-fhvrm\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.759595 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.759734 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3118a4f6-bfb6-4646-a543-2f2dcbf03681-config\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.759923 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3118a4f6-bfb6-4646-a543-2f2dcbf03681-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.766972 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3118a4f6-bfb6-4646-a543-2f2dcbf03681-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.770780 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3118a4f6-bfb6-4646-a543-2f2dcbf03681-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.773941 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3118a4f6-bfb6-4646-a543-2f2dcbf03681-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.777205 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3118a4f6-bfb6-4646-a543-2f2dcbf03681-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 
12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.789006 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhvrm\" (UniqueName: \"kubernetes.io/projected/3118a4f6-bfb6-4646-a543-2f2dcbf03681-kube-api-access-fhvrm\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.806137 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3118a4f6-bfb6-4646-a543-2f2dcbf03681\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:47 crc kubenswrapper[4930]: I1124 12:14:47.821759 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.027521 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-r7nwq"] Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.028493 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.031106 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.031556 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.033254 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-vnsnn" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.043673 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-q5rmd"] Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.045338 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.048804 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-r7nwq"] Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.075945 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-q5rmd"] Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.167005 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47adcfa9-c402-4f40-b558-bb2a56d93293-scripts\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.167087 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce96cb2b-064b-4d76-a101-df9f31c86314-scripts\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.167114 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ce96cb2b-064b-4d76-a101-df9f31c86314-var-log-ovn\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.167142 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2x8h\" (UniqueName: \"kubernetes.io/projected/ce96cb2b-064b-4d76-a101-df9f31c86314-kube-api-access-b2x8h\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 
12:14:48.167197 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tpbn\" (UniqueName: \"kubernetes.io/projected/47adcfa9-c402-4f40-b558-bb2a56d93293-kube-api-access-8tpbn\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.167233 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce96cb2b-064b-4d76-a101-df9f31c86314-var-run\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.167276 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47adcfa9-c402-4f40-b558-bb2a56d93293-var-run\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.167316 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/47adcfa9-c402-4f40-b558-bb2a56d93293-etc-ovs\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.167335 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/47adcfa9-c402-4f40-b558-bb2a56d93293-var-lib\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.167365 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce96cb2b-064b-4d76-a101-df9f31c86314-var-run-ovn\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.167386 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/47adcfa9-c402-4f40-b558-bb2a56d93293-var-log\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.167424 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce96cb2b-064b-4d76-a101-df9f31c86314-combined-ca-bundle\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.167475 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce96cb2b-064b-4d76-a101-df9f31c86314-ovn-controller-tls-certs\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.269039 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47adcfa9-c402-4f40-b558-bb2a56d93293-var-run\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.269165 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/47adcfa9-c402-4f40-b558-bb2a56d93293-etc-ovs\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.269198 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/47adcfa9-c402-4f40-b558-bb2a56d93293-var-lib\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.269228 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce96cb2b-064b-4d76-a101-df9f31c86314-var-run-ovn\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.269253 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/47adcfa9-c402-4f40-b558-bb2a56d93293-var-log\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.269283 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce96cb2b-064b-4d76-a101-df9f31c86314-combined-ca-bundle\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.269306 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce96cb2b-064b-4d76-a101-df9f31c86314-ovn-controller-tls-certs\") pod 
\"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.269353 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47adcfa9-c402-4f40-b558-bb2a56d93293-scripts\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.269378 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce96cb2b-064b-4d76-a101-df9f31c86314-scripts\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.269407 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ce96cb2b-064b-4d76-a101-df9f31c86314-var-log-ovn\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.269448 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2x8h\" (UniqueName: \"kubernetes.io/projected/ce96cb2b-064b-4d76-a101-df9f31c86314-kube-api-access-b2x8h\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.269480 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tpbn\" (UniqueName: \"kubernetes.io/projected/47adcfa9-c402-4f40-b558-bb2a56d93293-kube-api-access-8tpbn\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 
12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.269516 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce96cb2b-064b-4d76-a101-df9f31c86314-var-run\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.270329 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce96cb2b-064b-4d76-a101-df9f31c86314-var-run\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.270725 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ce96cb2b-064b-4d76-a101-df9f31c86314-var-log-ovn\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.270807 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/47adcfa9-c402-4f40-b558-bb2a56d93293-var-log\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.270807 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce96cb2b-064b-4d76-a101-df9f31c86314-var-run-ovn\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.270981 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/47adcfa9-c402-4f40-b558-bb2a56d93293-etc-ovs\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.271107 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/47adcfa9-c402-4f40-b558-bb2a56d93293-var-lib\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.271242 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47adcfa9-c402-4f40-b558-bb2a56d93293-var-run\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.273076 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47adcfa9-c402-4f40-b558-bb2a56d93293-scripts\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.273110 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce96cb2b-064b-4d76-a101-df9f31c86314-scripts\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.288150 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce96cb2b-064b-4d76-a101-df9f31c86314-ovn-controller-tls-certs\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " 
pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.288180 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce96cb2b-064b-4d76-a101-df9f31c86314-combined-ca-bundle\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.301286 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2x8h\" (UniqueName: \"kubernetes.io/projected/ce96cb2b-064b-4d76-a101-df9f31c86314-kube-api-access-b2x8h\") pod \"ovn-controller-r7nwq\" (UID: \"ce96cb2b-064b-4d76-a101-df9f31c86314\") " pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.304128 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tpbn\" (UniqueName: \"kubernetes.io/projected/47adcfa9-c402-4f40-b558-bb2a56d93293-kube-api-access-8tpbn\") pod \"ovn-controller-ovs-q5rmd\" (UID: \"47adcfa9-c402-4f40-b558-bb2a56d93293\") " pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.382804 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-r7nwq" Nov 24 12:14:48 crc kubenswrapper[4930]: I1124 12:14:48.392318 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:14:50 crc kubenswrapper[4930]: I1124 12:14:50.942157 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 12:14:50 crc kubenswrapper[4930]: I1124 12:14:50.943929 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:50 crc kubenswrapper[4930]: I1124 12:14:50.948210 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 12:14:50 crc kubenswrapper[4930]: I1124 12:14:50.948295 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 12:14:50 crc kubenswrapper[4930]: I1124 12:14:50.948448 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6l4jj" Nov 24 12:14:50 crc kubenswrapper[4930]: I1124 12:14:50.948810 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 12:14:50 crc kubenswrapper[4930]: I1124 12:14:50.955019 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.012340 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abae5d96-d4bd-42db-8517-ac6defbb22f2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.013208 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/abae5d96-d4bd-42db-8517-ac6defbb22f2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.013244 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abae5d96-d4bd-42db-8517-ac6defbb22f2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" 
(UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.013282 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abae5d96-d4bd-42db-8517-ac6defbb22f2-config\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.013310 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/abae5d96-d4bd-42db-8517-ac6defbb22f2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.013332 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgc5m\" (UniqueName: \"kubernetes.io/projected/abae5d96-d4bd-42db-8517-ac6defbb22f2-kube-api-access-dgc5m\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.013424 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.013501 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abae5d96-d4bd-42db-8517-ac6defbb22f2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc 
kubenswrapper[4930]: I1124 12:14:51.115242 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.115319 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abae5d96-d4bd-42db-8517-ac6defbb22f2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.115384 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abae5d96-d4bd-42db-8517-ac6defbb22f2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.115427 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/abae5d96-d4bd-42db-8517-ac6defbb22f2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.115450 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abae5d96-d4bd-42db-8517-ac6defbb22f2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.115473 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/abae5d96-d4bd-42db-8517-ac6defbb22f2-config\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.115496 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/abae5d96-d4bd-42db-8517-ac6defbb22f2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.115514 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgc5m\" (UniqueName: \"kubernetes.io/projected/abae5d96-d4bd-42db-8517-ac6defbb22f2-kube-api-access-dgc5m\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.115679 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.116012 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/abae5d96-d4bd-42db-8517-ac6defbb22f2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.116737 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abae5d96-d4bd-42db-8517-ac6defbb22f2-config\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " 
pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.118193 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abae5d96-d4bd-42db-8517-ac6defbb22f2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.121237 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abae5d96-d4bd-42db-8517-ac6defbb22f2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.123805 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abae5d96-d4bd-42db-8517-ac6defbb22f2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.129271 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/abae5d96-d4bd-42db-8517-ac6defbb22f2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.137284 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.141108 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgc5m\" (UniqueName: 
\"kubernetes.io/projected/abae5d96-d4bd-42db-8517-ac6defbb22f2-kube-api-access-dgc5m\") pod \"ovsdbserver-sb-0\" (UID: \"abae5d96-d4bd-42db-8517-ac6defbb22f2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:51 crc kubenswrapper[4930]: I1124 12:14:51.264770 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 12:14:57 crc kubenswrapper[4930]: E1124 12:14:57.114810 4930 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce" Nov 24 12:14:57 crc kubenswrapper[4930]: E1124 12:14:57.115692 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,Re
adOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzq2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(bddca103-daee-4f61-9165-1f6ec4762bd1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:14:57 crc kubenswrapper[4930]: E1124 12:14:57.117155 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="bddca103-daee-4f61-9165-1f6ec4762bd1" Nov 24 12:14:57 crc kubenswrapper[4930]: E1124 12:14:57.534119 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce\\\"\"" pod="openstack/openstack-galera-0" podUID="bddca103-daee-4f61-9165-1f6ec4762bd1" Nov 24 12:14:58 crc kubenswrapper[4930]: E1124 12:14:58.962882 4930 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b" Nov 24 12:14:58 crc kubenswrapper[4930]: E1124 12:14:58.963374 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkc4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(d35e6340-889e-4150-90c7-059417befffd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:14:58 crc 
kubenswrapper[4930]: E1124 12:14:58.964661 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d35e6340-889e-4150-90c7-059417befffd" Nov 24 12:14:58 crc kubenswrapper[4930]: E1124 12:14:58.989271 4930 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b" Nov 24 12:14:58 crc kubenswrapper[4930]: E1124 12:14:58.989518 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zsxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(270a64e1-2837-47ac-860f-d616efdc6bbc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:14:58 crc 
kubenswrapper[4930]: E1124 12:14:58.990732 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="270a64e1-2837-47ac-860f-d616efdc6bbc" Nov 24 12:14:59 crc kubenswrapper[4930]: E1124 12:14:59.547969 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d35e6340-889e-4150-90c7-059417befffd" Nov 24 12:14:59 crc kubenswrapper[4930]: E1124 12:14:59.548061 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b\\\"\"" pod="openstack/rabbitmq-server-0" podUID="270a64e1-2837-47ac-860f-d616efdc6bbc" Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.149745 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr"] Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.151779 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.153884 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.155658 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.161223 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr"] Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.264556 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3963e6bb-dfea-4a47-9765-0203d3b7ed65-secret-volume\") pod \"collect-profiles-29399775-mz9mr\" (UID: \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.264648 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3963e6bb-dfea-4a47-9765-0203d3b7ed65-config-volume\") pod \"collect-profiles-29399775-mz9mr\" (UID: \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.264726 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mdxv\" (UniqueName: \"kubernetes.io/projected/3963e6bb-dfea-4a47-9765-0203d3b7ed65-kube-api-access-5mdxv\") pod \"collect-profiles-29399775-mz9mr\" (UID: \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.365794 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3963e6bb-dfea-4a47-9765-0203d3b7ed65-secret-volume\") pod \"collect-profiles-29399775-mz9mr\" (UID: \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.365901 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3963e6bb-dfea-4a47-9765-0203d3b7ed65-config-volume\") pod \"collect-profiles-29399775-mz9mr\" (UID: \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.365965 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mdxv\" (UniqueName: \"kubernetes.io/projected/3963e6bb-dfea-4a47-9765-0203d3b7ed65-kube-api-access-5mdxv\") pod \"collect-profiles-29399775-mz9mr\" (UID: \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.367055 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3963e6bb-dfea-4a47-9765-0203d3b7ed65-config-volume\") pod \"collect-profiles-29399775-mz9mr\" (UID: \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.373958 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3963e6bb-dfea-4a47-9765-0203d3b7ed65-secret-volume\") pod \"collect-profiles-29399775-mz9mr\" (UID: \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.386523 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mdxv\" (UniqueName: \"kubernetes.io/projected/3963e6bb-dfea-4a47-9765-0203d3b7ed65-kube-api-access-5mdxv\") pod \"collect-profiles-29399775-mz9mr\" (UID: \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" Nov 24 12:15:00 crc kubenswrapper[4930]: I1124 12:15:00.478338 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.057480 4930 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.057928 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jldmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6d8746976c-42wpq_openstack(acf2e767-1d50-416b-aa31-16a1a6ee631c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.059158 4930 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6d8746976c-42wpq" podUID="acf2e767-1d50-416b-aa31-16a1a6ee631c" Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.244379 4930 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.244836 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vx662,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6486446b9f-9k44v_openstack(dbd5e1f6-e854-4370-8ae6-d23fb6fc083d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.246286 4930 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6486446b9f-9k44v" podUID="dbd5e1f6-e854-4370-8ae6-d23fb6fc083d" Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.265421 4930 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.265655 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48vz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7bdd77c89-4wgvw_openstack(1d924ea1-bfb3-448d-8ee6-44a3d70d45f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.267089 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-7bdd77c89-4wgvw" podUID="1d924ea1-bfb3-448d-8ee6-44a3d70d45f8" Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.281748 4930 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.281918 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8v89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:ni
l,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6584b49599-lc6s4_openstack(5920228b-413d-4ce2-8dcb-df479ff3d797): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.283118 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6584b49599-lc6s4" podUID="5920228b-413d-4ce2-8dcb-df479ff3d797" Nov 24 12:15:04 crc kubenswrapper[4930]: I1124 12:15:04.584014 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa","Type":"ContainerStarted","Data":"a748cf8411bcfd6326258464f4c148d3435a33f6706e4741f23f08b4ebd93295"} Nov 24 12:15:04 crc kubenswrapper[4930]: I1124 12:15:04.584599 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 24 12:15:04 crc kubenswrapper[4930]: I1124 12:15:04.587458 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"64612891-0a55-4622-8888-d141a949c665","Type":"ContainerStarted","Data":"c0321fbaa0723429c8755f1c827a062e8cf2040985217cb530041da80ea1ea6f"} Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.588898 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba\\\"\"" pod="openstack/dnsmasq-dns-6d8746976c-42wpq" podUID="acf2e767-1d50-416b-aa31-16a1a6ee631c" Nov 24 12:15:04 crc kubenswrapper[4930]: E1124 12:15:04.589214 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba\\\"\"" pod="openstack/dnsmasq-dns-6486446b9f-9k44v" podUID="dbd5e1f6-e854-4370-8ae6-d23fb6fc083d" Nov 24 12:15:04 crc kubenswrapper[4930]: I1124 12:15:04.611306 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.03003321 podStartE2EDuration="23.611280788s" podCreationTimestamp="2025-11-24 12:14:41 +0000 UTC" firstStartedPulling="2025-11-24 12:14:42.626990966 +0000 UTC m=+929.241318916" lastFinishedPulling="2025-11-24 12:15:04.208238544 +0000 UTC m=+950.822566494" observedRunningTime="2025-11-24 12:15:04.600519248 +0000 UTC m=+951.214847198" watchObservedRunningTime="2025-11-24 12:15:04.611280788 +0000 UTC m=+951.225608728" Nov 24 12:15:04 crc kubenswrapper[4930]: I1124 12:15:04.724867 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:15:04 crc kubenswrapper[4930]: I1124 12:15:04.823935 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 12:15:04 crc kubenswrapper[4930]: 
W1124 12:15:04.829733 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3118a4f6_bfb6_4646_a543_2f2dcbf03681.slice/crio-4904fca6564c08408588ef27a92e2adbb504612badb6e57af03464989dbecf9a WatchSource:0}: Error finding container 4904fca6564c08408588ef27a92e2adbb504612badb6e57af03464989dbecf9a: Status 404 returned error can't find the container with id 4904fca6564c08408588ef27a92e2adbb504612badb6e57af03464989dbecf9a Nov 24 12:15:04 crc kubenswrapper[4930]: I1124 12:15:04.983922 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-4wgvw" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.034801 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-r7nwq"] Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.036552 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-lc6s4" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.043393 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr"] Nov 24 12:15:05 crc kubenswrapper[4930]: W1124 12:15:05.061696 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3963e6bb_dfea_4a47_9765_0203d3b7ed65.slice/crio-4a47f5593a1ca2161c473ff8bf04fe36001c2b8d30a1f4b501827b1d947f8362 WatchSource:0}: Error finding container 4a47f5593a1ca2161c473ff8bf04fe36001c2b8d30a1f4b501827b1d947f8362: Status 404 returned error can't find the container with id 4a47f5593a1ca2161c473ff8bf04fe36001c2b8d30a1f4b501827b1d947f8362 Nov 24 12:15:05 crc kubenswrapper[4930]: W1124 12:15:05.061937 4930 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce96cb2b_064b_4d76_a101_df9f31c86314.slice/crio-ce89f88d5a7d0c04376f56a53839394a2d0ae77dbec70d605dd71680535a33e0 WatchSource:0}: Error finding container ce89f88d5a7d0c04376f56a53839394a2d0ae77dbec70d605dd71680535a33e0: Status 404 returned error can't find the container with id ce89f88d5a7d0c04376f56a53839394a2d0ae77dbec70d605dd71680535a33e0 Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.144473 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.147331 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48vz4\" (UniqueName: \"kubernetes.io/projected/1d924ea1-bfb3-448d-8ee6-44a3d70d45f8-kube-api-access-48vz4\") pod \"1d924ea1-bfb3-448d-8ee6-44a3d70d45f8\" (UID: \"1d924ea1-bfb3-448d-8ee6-44a3d70d45f8\") " Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.147669 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5920228b-413d-4ce2-8dcb-df479ff3d797-dns-svc\") pod \"5920228b-413d-4ce2-8dcb-df479ff3d797\" (UID: \"5920228b-413d-4ce2-8dcb-df479ff3d797\") " Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.147725 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5920228b-413d-4ce2-8dcb-df479ff3d797-config\") pod \"5920228b-413d-4ce2-8dcb-df479ff3d797\" (UID: \"5920228b-413d-4ce2-8dcb-df479ff3d797\") " Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.147812 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d924ea1-bfb3-448d-8ee6-44a3d70d45f8-config\") pod \"1d924ea1-bfb3-448d-8ee6-44a3d70d45f8\" (UID: \"1d924ea1-bfb3-448d-8ee6-44a3d70d45f8\") " Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 
12:15:05.147857 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8v89\" (UniqueName: \"kubernetes.io/projected/5920228b-413d-4ce2-8dcb-df479ff3d797-kube-api-access-m8v89\") pod \"5920228b-413d-4ce2-8dcb-df479ff3d797\" (UID: \"5920228b-413d-4ce2-8dcb-df479ff3d797\") " Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.148288 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5920228b-413d-4ce2-8dcb-df479ff3d797-config" (OuterVolumeSpecName: "config") pod "5920228b-413d-4ce2-8dcb-df479ff3d797" (UID: "5920228b-413d-4ce2-8dcb-df479ff3d797"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.148341 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5920228b-413d-4ce2-8dcb-df479ff3d797-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5920228b-413d-4ce2-8dcb-df479ff3d797" (UID: "5920228b-413d-4ce2-8dcb-df479ff3d797"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.148876 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5920228b-413d-4ce2-8dcb-df479ff3d797-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.148940 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5920228b-413d-4ce2-8dcb-df479ff3d797-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.148984 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d924ea1-bfb3-448d-8ee6-44a3d70d45f8-config" (OuterVolumeSpecName: "config") pod "1d924ea1-bfb3-448d-8ee6-44a3d70d45f8" (UID: "1d924ea1-bfb3-448d-8ee6-44a3d70d45f8"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.154059 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d924ea1-bfb3-448d-8ee6-44a3d70d45f8-kube-api-access-48vz4" (OuterVolumeSpecName: "kube-api-access-48vz4") pod "1d924ea1-bfb3-448d-8ee6-44a3d70d45f8" (UID: "1d924ea1-bfb3-448d-8ee6-44a3d70d45f8"). InnerVolumeSpecName "kube-api-access-48vz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.156182 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5920228b-413d-4ce2-8dcb-df479ff3d797-kube-api-access-m8v89" (OuterVolumeSpecName: "kube-api-access-m8v89") pod "5920228b-413d-4ce2-8dcb-df479ff3d797" (UID: "5920228b-413d-4ce2-8dcb-df479ff3d797"). InnerVolumeSpecName "kube-api-access-m8v89". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.252147 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d924ea1-bfb3-448d-8ee6-44a3d70d45f8-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.252209 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8v89\" (UniqueName: \"kubernetes.io/projected/5920228b-413d-4ce2-8dcb-df479ff3d797-kube-api-access-m8v89\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.252225 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48vz4\" (UniqueName: \"kubernetes.io/projected/1d924ea1-bfb3-448d-8ee6-44a3d70d45f8-kube-api-access-48vz4\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.593970 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-lc6s4" 
event={"ID":"5920228b-413d-4ce2-8dcb-df479ff3d797","Type":"ContainerDied","Data":"ebbc959e3a343b07d66a2b27bb95132ac67ec82fa2d1278a6f262ac141fcbcd1"} Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.594228 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-lc6s4" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.595281 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3118a4f6-bfb6-4646-a543-2f2dcbf03681","Type":"ContainerStarted","Data":"4904fca6564c08408588ef27a92e2adbb504612badb6e57af03464989dbecf9a"} Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.596651 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"abae5d96-d4bd-42db-8517-ac6defbb22f2","Type":"ContainerStarted","Data":"f562c2c1855ca1443924b0a74016767396b6058165d15e7b1596b406473d13f4"} Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.597708 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r7nwq" event={"ID":"ce96cb2b-064b-4d76-a101-df9f31c86314","Type":"ContainerStarted","Data":"ce89f88d5a7d0c04376f56a53839394a2d0ae77dbec70d605dd71680535a33e0"} Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.599517 4930 generic.go:334] "Generic (PLEG): container finished" podID="3963e6bb-dfea-4a47-9765-0203d3b7ed65" containerID="84037103c3c11b749e0515510a49bf3342b1a61a07cb3d2d13e722c47ea6ad27" exitCode=0 Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.599620 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" event={"ID":"3963e6bb-dfea-4a47-9765-0203d3b7ed65","Type":"ContainerDied","Data":"84037103c3c11b749e0515510a49bf3342b1a61a07cb3d2d13e722c47ea6ad27"} Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.599647 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" event={"ID":"3963e6bb-dfea-4a47-9765-0203d3b7ed65","Type":"ContainerStarted","Data":"4a47f5593a1ca2161c473ff8bf04fe36001c2b8d30a1f4b501827b1d947f8362"} Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.600855 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7cce9366-d1b8-46ab-8ceb-05f6b71348f1","Type":"ContainerStarted","Data":"7024730a843ea0c90a6f660ccb478c441ddc81867c9611de834d624326224c2f"} Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.602263 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-4wgvw" event={"ID":"1d924ea1-bfb3-448d-8ee6-44a3d70d45f8","Type":"ContainerDied","Data":"5bc35a9cb5fe0d67dc792c7ceebc3c93aada1176ef3004c8fa4c48c4b31499e3"} Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.602326 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-4wgvw" Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.664577 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-lc6s4"] Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.675305 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-lc6s4"] Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.707963 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-4wgvw"] Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.718391 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-4wgvw"] Nov 24 12:15:05 crc kubenswrapper[4930]: I1124 12:15:05.757347 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-q5rmd"] Nov 24 12:15:06 crc kubenswrapper[4930]: I1124 12:15:06.102715 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d924ea1-bfb3-448d-8ee6-44a3d70d45f8" path="/var/lib/kubelet/pods/1d924ea1-bfb3-448d-8ee6-44a3d70d45f8/volumes" Nov 24 12:15:06 crc kubenswrapper[4930]: I1124 12:15:06.103900 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5920228b-413d-4ce2-8dcb-df479ff3d797" path="/var/lib/kubelet/pods/5920228b-413d-4ce2-8dcb-df479ff3d797/volumes" Nov 24 12:15:06 crc kubenswrapper[4930]: I1124 12:15:06.651150 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q5rmd" event={"ID":"47adcfa9-c402-4f40-b558-bb2a56d93293","Type":"ContainerStarted","Data":"68cd020b39bad61743d8984348c01f85714dd66d979dd1b4f06d3e9ae4079f47"} Nov 24 12:15:07 crc kubenswrapper[4930]: I1124 12:15:07.160844 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" Nov 24 12:15:07 crc kubenswrapper[4930]: I1124 12:15:07.294326 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3963e6bb-dfea-4a47-9765-0203d3b7ed65-secret-volume\") pod \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\" (UID: \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\") " Nov 24 12:15:07 crc kubenswrapper[4930]: I1124 12:15:07.294929 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3963e6bb-dfea-4a47-9765-0203d3b7ed65-config-volume\") pod \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\" (UID: \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\") " Nov 24 12:15:07 crc kubenswrapper[4930]: I1124 12:15:07.295007 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mdxv\" (UniqueName: \"kubernetes.io/projected/3963e6bb-dfea-4a47-9765-0203d3b7ed65-kube-api-access-5mdxv\") pod \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\" (UID: \"3963e6bb-dfea-4a47-9765-0203d3b7ed65\") " Nov 24 12:15:07 crc 
kubenswrapper[4930]: I1124 12:15:07.295748 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3963e6bb-dfea-4a47-9765-0203d3b7ed65-config-volume" (OuterVolumeSpecName: "config-volume") pod "3963e6bb-dfea-4a47-9765-0203d3b7ed65" (UID: "3963e6bb-dfea-4a47-9765-0203d3b7ed65"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:07 crc kubenswrapper[4930]: I1124 12:15:07.302254 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3963e6bb-dfea-4a47-9765-0203d3b7ed65-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3963e6bb-dfea-4a47-9765-0203d3b7ed65" (UID: "3963e6bb-dfea-4a47-9765-0203d3b7ed65"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:07 crc kubenswrapper[4930]: I1124 12:15:07.303924 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3963e6bb-dfea-4a47-9765-0203d3b7ed65-kube-api-access-5mdxv" (OuterVolumeSpecName: "kube-api-access-5mdxv") pod "3963e6bb-dfea-4a47-9765-0203d3b7ed65" (UID: "3963e6bb-dfea-4a47-9765-0203d3b7ed65"). InnerVolumeSpecName "kube-api-access-5mdxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:07 crc kubenswrapper[4930]: I1124 12:15:07.397068 4930 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3963e6bb-dfea-4a47-9765-0203d3b7ed65-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:07 crc kubenswrapper[4930]: I1124 12:15:07.397525 4930 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3963e6bb-dfea-4a47-9765-0203d3b7ed65-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:07 crc kubenswrapper[4930]: I1124 12:15:07.397581 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mdxv\" (UniqueName: \"kubernetes.io/projected/3963e6bb-dfea-4a47-9765-0203d3b7ed65-kube-api-access-5mdxv\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:07 crc kubenswrapper[4930]: I1124 12:15:07.660377 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" event={"ID":"3963e6bb-dfea-4a47-9765-0203d3b7ed65","Type":"ContainerDied","Data":"4a47f5593a1ca2161c473ff8bf04fe36001c2b8d30a1f4b501827b1d947f8362"} Nov 24 12:15:07 crc kubenswrapper[4930]: I1124 12:15:07.660426 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a47f5593a1ca2161c473ff8bf04fe36001c2b8d30a1f4b501827b1d947f8362" Nov 24 12:15:07 crc kubenswrapper[4930]: I1124 12:15:07.660568 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr" Nov 24 12:15:08 crc kubenswrapper[4930]: I1124 12:15:08.670070 4930 generic.go:334] "Generic (PLEG): container finished" podID="64612891-0a55-4622-8888-d141a949c665" containerID="c0321fbaa0723429c8755f1c827a062e8cf2040985217cb530041da80ea1ea6f" exitCode=0 Nov 24 12:15:08 crc kubenswrapper[4930]: I1124 12:15:08.670154 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"64612891-0a55-4622-8888-d141a949c665","Type":"ContainerDied","Data":"c0321fbaa0723429c8755f1c827a062e8cf2040985217cb530041da80ea1ea6f"} Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.623646 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-fnxs8"] Nov 24 12:15:11 crc kubenswrapper[4930]: E1124 12:15:11.624803 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3963e6bb-dfea-4a47-9765-0203d3b7ed65" containerName="collect-profiles" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.624817 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="3963e6bb-dfea-4a47-9765-0203d3b7ed65" containerName="collect-profiles" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.624985 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="3963e6bb-dfea-4a47-9765-0203d3b7ed65" containerName="collect-profiles" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.625585 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.629809 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.645485 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fnxs8"] Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.670106 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4686e3a-6cd1-4ada-a593-a7cfa2598257-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.670183 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4686e3a-6cd1-4ada-a593-a7cfa2598257-config\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.670261 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4686e3a-6cd1-4ada-a593-a7cfa2598257-combined-ca-bundle\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.670306 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n64hj\" (UniqueName: \"kubernetes.io/projected/b4686e3a-6cd1-4ada-a593-a7cfa2598257-kube-api-access-n64hj\") pod \"ovn-controller-metrics-fnxs8\" (UID: 
\"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.670361 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b4686e3a-6cd1-4ada-a593-a7cfa2598257-ovn-rundir\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.670428 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b4686e3a-6cd1-4ada-a593-a7cfa2598257-ovs-rundir\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.701956 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"abae5d96-d4bd-42db-8517-ac6defbb22f2","Type":"ContainerStarted","Data":"924080dcbe043e3b03c71a7f6d01c845e1c0662af7c874f657b1fa7f68c08755"} Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.704080 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r7nwq" event={"ID":"ce96cb2b-064b-4d76-a101-df9f31c86314","Type":"ContainerStarted","Data":"c7a828a497fef61c02cd5af7bc15ce62ece18adfd01d03d7f77f8f0878372611"} Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.705991 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-r7nwq" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.711641 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"64612891-0a55-4622-8888-d141a949c665","Type":"ContainerStarted","Data":"729e3d28b3066d11ce539fe9763a4514822497206969826369534b498acee468"} Nov 24 12:15:11 crc 
kubenswrapper[4930]: I1124 12:15:11.714755 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bddca103-daee-4f61-9165-1f6ec4762bd1","Type":"ContainerStarted","Data":"b81afd24717691d8e0ae56b5a7fa0b7e4bb2a1b5a19b55ffbae61e4a41671d8e"} Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.720889 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7cce9366-d1b8-46ab-8ceb-05f6b71348f1","Type":"ContainerStarted","Data":"3a8942f1411aeaac44101a24d6a8c8e7cbf34f00a1e96cee5d028bcae3942f6b"} Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.722053 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.727035 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q5rmd" event={"ID":"47adcfa9-c402-4f40-b558-bb2a56d93293","Type":"ContainerStarted","Data":"4076bd90e4724d81f5e7f35e9b98d58caaded73006dac25bf0127630809549c6"} Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.750699 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3118a4f6-bfb6-4646-a543-2f2dcbf03681","Type":"ContainerStarted","Data":"164971da183f1dd4d886a12a02109bf572ac9115c08395d74da76711abf2460e"} Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.754048 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-r7nwq" podStartSLOduration=17.892006421 podStartE2EDuration="23.754027252s" podCreationTimestamp="2025-11-24 12:14:48 +0000 UTC" firstStartedPulling="2025-11-24 12:15:05.065285509 +0000 UTC m=+951.679613459" lastFinishedPulling="2025-11-24 12:15:10.92730634 +0000 UTC m=+957.541634290" observedRunningTime="2025-11-24 12:15:11.733500852 +0000 UTC m=+958.347828802" watchObservedRunningTime="2025-11-24 12:15:11.754027252 +0000 UTC m=+958.368355202" Nov 24 
12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.772808 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4686e3a-6cd1-4ada-a593-a7cfa2598257-combined-ca-bundle\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.772905 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n64hj\" (UniqueName: \"kubernetes.io/projected/b4686e3a-6cd1-4ada-a593-a7cfa2598257-kube-api-access-n64hj\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.773016 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b4686e3a-6cd1-4ada-a593-a7cfa2598257-ovn-rundir\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.773147 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b4686e3a-6cd1-4ada-a593-a7cfa2598257-ovs-rundir\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.773317 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4686e3a-6cd1-4ada-a593-a7cfa2598257-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: 
I1124 12:15:11.773372 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4686e3a-6cd1-4ada-a593-a7cfa2598257-config\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.776357 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b4686e3a-6cd1-4ada-a593-a7cfa2598257-ovn-rundir\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.784887 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b4686e3a-6cd1-4ada-a593-a7cfa2598257-ovs-rundir\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.788473 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4686e3a-6cd1-4ada-a593-a7cfa2598257-config\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.789072 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4686e3a-6cd1-4ada-a593-a7cfa2598257-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.799332 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=22.629908241 podStartE2EDuration="28.799304955s" podCreationTimestamp="2025-11-24 12:14:43 +0000 UTC" firstStartedPulling="2025-11-24 12:15:04.757937137 +0000 UTC m=+951.372265077" lastFinishedPulling="2025-11-24 12:15:10.927333841 +0000 UTC m=+957.541661791" observedRunningTime="2025-11-24 12:15:11.785673353 +0000 UTC m=+958.400001303" watchObservedRunningTime="2025-11-24 12:15:11.799304955 +0000 UTC m=+958.413632905" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.800061 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4686e3a-6cd1-4ada-a593-a7cfa2598257-combined-ca-bundle\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.811781 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n64hj\" (UniqueName: \"kubernetes.io/projected/b4686e3a-6cd1-4ada-a593-a7cfa2598257-kube-api-access-n64hj\") pod \"ovn-controller-metrics-fnxs8\" (UID: \"b4686e3a-6cd1-4ada-a593-a7cfa2598257\") " pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.897351 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-9k44v"] Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.909206 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-dsdwm"] Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.910520 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.000724755 podStartE2EDuration="31.910489193s" podCreationTimestamp="2025-11-24 12:14:40 +0000 UTC" firstStartedPulling="2025-11-24 12:14:42.230041497 +0000 UTC m=+928.844369447" lastFinishedPulling="2025-11-24 
12:15:04.139805935 +0000 UTC m=+950.754133885" observedRunningTime="2025-11-24 12:15:11.89717121 +0000 UTC m=+958.511499180" watchObservedRunningTime="2025-11-24 12:15:11.910489193 +0000 UTC m=+958.524817143" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.911231 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.915798 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.918947 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.930408 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-dsdwm"] Nov 24 12:15:11 crc kubenswrapper[4930]: I1124 12:15:11.945438 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-fnxs8" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.094689 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-dns-svc\") pod \"dnsmasq-dns-6c65c5f57f-dsdwm\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.094778 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-config\") pod \"dnsmasq-dns-6c65c5f57f-dsdwm\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.094836 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-zjsgh\" (UniqueName: \"kubernetes.io/projected/f1e3cf45-39da-444c-87e4-cec4337e0bfe-kube-api-access-zjsgh\") pod \"dnsmasq-dns-6c65c5f57f-dsdwm\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.094932 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-ovsdbserver-nb\") pod \"dnsmasq-dns-6c65c5f57f-dsdwm\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.198643 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-config\") pod \"dnsmasq-dns-6c65c5f57f-dsdwm\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.199280 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjsgh\" (UniqueName: \"kubernetes.io/projected/f1e3cf45-39da-444c-87e4-cec4337e0bfe-kube-api-access-zjsgh\") pod \"dnsmasq-dns-6c65c5f57f-dsdwm\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.199625 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-ovsdbserver-nb\") pod \"dnsmasq-dns-6c65c5f57f-dsdwm\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.199719 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-dns-svc\") pod \"dnsmasq-dns-6c65c5f57f-dsdwm\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.207312 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-config\") pod \"dnsmasq-dns-6c65c5f57f-dsdwm\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.221339 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-ovsdbserver-nb\") pod \"dnsmasq-dns-6c65c5f57f-dsdwm\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.224860 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-dns-svc\") pod \"dnsmasq-dns-6c65c5f57f-dsdwm\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.262403 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjsgh\" (UniqueName: \"kubernetes.io/projected/f1e3cf45-39da-444c-87e4-cec4337e0bfe-kube-api-access-zjsgh\") pod \"dnsmasq-dns-6c65c5f57f-dsdwm\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.312132 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.325076 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-42wpq"] Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.377868 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-vpctc"] Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.379322 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.386806 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.422516 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-vpctc"] Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.514356 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.514829 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-config\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.514942 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.515016 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggfzx\" (UniqueName: \"kubernetes.io/projected/444600bc-753f-4156-ba87-5b31d4197d04-kube-api-access-ggfzx\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.515048 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.619044 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.619135 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.619160 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-config\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: 
\"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.619262 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.619337 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggfzx\" (UniqueName: \"kubernetes.io/projected/444600bc-753f-4156-ba87-5b31d4197d04-kube-api-access-ggfzx\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.620694 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.620783 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-config\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.621384 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 
24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.621607 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.649128 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggfzx\" (UniqueName: \"kubernetes.io/projected/444600bc-753f-4156-ba87-5b31d4197d04-kube-api-access-ggfzx\") pod \"dnsmasq-dns-5c476d78c5-vpctc\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.684140 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-9k44v" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.718750 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.763876 4930 generic.go:334] "Generic (PLEG): container finished" podID="47adcfa9-c402-4f40-b558-bb2a56d93293" containerID="4076bd90e4724d81f5e7f35e9b98d58caaded73006dac25bf0127630809549c6" exitCode=0 Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.763936 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q5rmd" event={"ID":"47adcfa9-c402-4f40-b558-bb2a56d93293","Type":"ContainerDied","Data":"4076bd90e4724d81f5e7f35e9b98d58caaded73006dac25bf0127630809549c6"} Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.767140 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d35e6340-889e-4150-90c7-059417befffd","Type":"ContainerStarted","Data":"0881b547c4c2e31ed4e67f5a73adbf8448735700219670fe6dabd9d75186822a"} Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.769135 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-9k44v" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.769281 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-9k44v" event={"ID":"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d","Type":"ContainerDied","Data":"b489f1362591ef3303d2b1477bc4e58b9e834cdc7f588ac801e13edc085d4d38"} Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.823276 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-config\") pod \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\" (UID: \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\") " Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.823467 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx662\" (UniqueName: \"kubernetes.io/projected/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-kube-api-access-vx662\") pod \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\" (UID: \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\") " Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.823550 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-dns-svc\") pod \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\" (UID: \"dbd5e1f6-e854-4370-8ae6-d23fb6fc083d\") " Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.824434 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-config" (OuterVolumeSpecName: "config") pod "dbd5e1f6-e854-4370-8ae6-d23fb6fc083d" (UID: "dbd5e1f6-e854-4370-8ae6-d23fb6fc083d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.825369 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.825894 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dbd5e1f6-e854-4370-8ae6-d23fb6fc083d" (UID: "dbd5e1f6-e854-4370-8ae6-d23fb6fc083d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.828878 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-kube-api-access-vx662" (OuterVolumeSpecName: "kube-api-access-vx662") pod "dbd5e1f6-e854-4370-8ae6-d23fb6fc083d" (UID: "dbd5e1f6-e854-4370-8ae6-d23fb6fc083d"). InnerVolumeSpecName "kube-api-access-vx662". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.863936 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fnxs8"] Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.889205 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:15:12 crc kubenswrapper[4930]: W1124 12:15:12.899491 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4686e3a_6cd1_4ada_a593_a7cfa2598257.slice/crio-98e0129ecc5e0a2496ab03dabee85db1755784fad79029e8eaa530929034f8dc WatchSource:0}: Error finding container 98e0129ecc5e0a2496ab03dabee85db1755784fad79029e8eaa530929034f8dc: Status 404 returned error can't find the container with id 98e0129ecc5e0a2496ab03dabee85db1755784fad79029e8eaa530929034f8dc Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.926695 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx662\" (UniqueName: \"kubernetes.io/projected/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-kube-api-access-vx662\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:12 crc kubenswrapper[4930]: I1124 12:15:12.926733 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.028166 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acf2e767-1d50-416b-aa31-16a1a6ee631c-dns-svc\") pod \"acf2e767-1d50-416b-aa31-16a1a6ee631c\" (UID: \"acf2e767-1d50-416b-aa31-16a1a6ee631c\") " Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.028307 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf2e767-1d50-416b-aa31-16a1a6ee631c-config\") pod \"acf2e767-1d50-416b-aa31-16a1a6ee631c\" (UID: \"acf2e767-1d50-416b-aa31-16a1a6ee631c\") " Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.028390 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jldmt\" 
(UniqueName: \"kubernetes.io/projected/acf2e767-1d50-416b-aa31-16a1a6ee631c-kube-api-access-jldmt\") pod \"acf2e767-1d50-416b-aa31-16a1a6ee631c\" (UID: \"acf2e767-1d50-416b-aa31-16a1a6ee631c\") " Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.028717 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acf2e767-1d50-416b-aa31-16a1a6ee631c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "acf2e767-1d50-416b-aa31-16a1a6ee631c" (UID: "acf2e767-1d50-416b-aa31-16a1a6ee631c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.028901 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acf2e767-1d50-416b-aa31-16a1a6ee631c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.029259 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acf2e767-1d50-416b-aa31-16a1a6ee631c-config" (OuterVolumeSpecName: "config") pod "acf2e767-1d50-416b-aa31-16a1a6ee631c" (UID: "acf2e767-1d50-416b-aa31-16a1a6ee631c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.054475 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf2e767-1d50-416b-aa31-16a1a6ee631c-kube-api-access-jldmt" (OuterVolumeSpecName: "kube-api-access-jldmt") pod "acf2e767-1d50-416b-aa31-16a1a6ee631c" (UID: "acf2e767-1d50-416b-aa31-16a1a6ee631c"). InnerVolumeSpecName "kube-api-access-jldmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.055289 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-dsdwm"] Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.142090 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf2e767-1d50-416b-aa31-16a1a6ee631c-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.142424 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jldmt\" (UniqueName: \"kubernetes.io/projected/acf2e767-1d50-416b-aa31-16a1a6ee631c-kube-api-access-jldmt\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.149904 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-9k44v"] Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.155033 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-9k44v"] Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.379326 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-vpctc"] Nov 24 12:15:13 crc kubenswrapper[4930]: W1124 12:15:13.396844 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod444600bc_753f_4156_ba87_5b31d4197d04.slice/crio-c4d41c0bc1bf47eaf6cb00d9668be98f538f12d7162ba382217664d8fd4f85c4 WatchSource:0}: Error finding container c4d41c0bc1bf47eaf6cb00d9668be98f538f12d7162ba382217664d8fd4f85c4: Status 404 returned error can't find the container with id c4d41c0bc1bf47eaf6cb00d9668be98f538f12d7162ba382217664d8fd4f85c4 Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.809165 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-dsdwm"] Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 
12:15:13.818314 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" event={"ID":"f1e3cf45-39da-444c-87e4-cec4337e0bfe","Type":"ContainerStarted","Data":"7b6e828fa025127dfa10e144d7f0ba20e039209006dce78ec0f68f966c1b082a"} Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.826253 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fnxs8" event={"ID":"b4686e3a-6cd1-4ada-a593-a7cfa2598257","Type":"ContainerStarted","Data":"98e0129ecc5e0a2496ab03dabee85db1755784fad79029e8eaa530929034f8dc"} Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.854640 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-hsqrp"] Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.856213 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"270a64e1-2837-47ac-860f-d616efdc6bbc","Type":"ContainerStarted","Data":"d60a7b8775c78678336cadec05e6522445fc860e50de6e16c9d5f9b3ff8c35c1"} Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.856741 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.876859 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q5rmd" event={"ID":"47adcfa9-c402-4f40-b558-bb2a56d93293","Type":"ContainerStarted","Data":"2cb637919bdf1919c4dd19fa9344fd35fb24feaf50668a0b1a122afc2a4c6a38"} Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.882160 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-hsqrp"] Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.891222 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" event={"ID":"444600bc-753f-4156-ba87-5b31d4197d04","Type":"ContainerStarted","Data":"c4d41c0bc1bf47eaf6cb00d9668be98f538f12d7162ba382217664d8fd4f85c4"} Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.915080 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.916648 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8746976c-42wpq" event={"ID":"acf2e767-1d50-416b-aa31-16a1a6ee631c","Type":"ContainerDied","Data":"f7e6c46c74cacc0c2320154631f2ad4b4e373478a1398119eeb737d2f61cbbef"} Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.963360 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.963459 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-config\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.963599 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgql9\" (UniqueName: \"kubernetes.io/projected/5cd25a17-d530-48be-aac4-0011fc6c29f1-kube-api-access-cgql9\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.963660 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:13 crc kubenswrapper[4930]: I1124 12:15:13.963696 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.069011 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-config\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.069105 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgql9\" (UniqueName: 
\"kubernetes.io/projected/5cd25a17-d530-48be-aac4-0011fc6c29f1-kube-api-access-cgql9\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.069164 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.069192 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.069258 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.070687 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-config\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.072235 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.072951 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.073363 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.108263 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgql9\" (UniqueName: \"kubernetes.io/projected/5cd25a17-d530-48be-aac4-0011fc6c29f1-kube-api-access-cgql9\") pod \"dnsmasq-dns-5c9fdb784c-hsqrp\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.133329 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbd5e1f6-e854-4370-8ae6-d23fb6fc083d" path="/var/lib/kubelet/pods/dbd5e1f6-e854-4370-8ae6-d23fb6fc083d/volumes" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.202033 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.667770 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-hsqrp"] Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.934473 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.943227 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.943345 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.944609 4930 generic.go:334] "Generic (PLEG): container finished" podID="444600bc-753f-4156-ba87-5b31d4197d04" containerID="c22dfe83d528c319a11a76ea33a5d8e628f5bcb11360091d6aa7e383331fd328" exitCode=0 Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.944679 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" event={"ID":"444600bc-753f-4156-ba87-5b31d4197d04","Type":"ContainerDied","Data":"c22dfe83d528c319a11a76ea33a5d8e628f5bcb11360091d6aa7e383331fd328"} Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.949726 4930 generic.go:334] "Generic (PLEG): container finished" podID="f1e3cf45-39da-444c-87e4-cec4337e0bfe" containerID="d9dea5a5a6dc16fac9b33ad7dc7c119e8c300d8f83cdc3b27ccc075a6459372c" exitCode=0 Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.949804 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" event={"ID":"f1e3cf45-39da-444c-87e4-cec4337e0bfe","Type":"ContainerDied","Data":"d9dea5a5a6dc16fac9b33ad7dc7c119e8c300d8f83cdc3b27ccc075a6459372c"} Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.954894 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q5rmd" 
event={"ID":"47adcfa9-c402-4f40-b558-bb2a56d93293","Type":"ContainerStarted","Data":"df40e173d407f9660ef2c68460557f7c0af4e2ef535c7a22e909e74b9be35567"} Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.955638 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.955665 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.956326 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.956334 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.956453 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-xwbwj" Nov 24 12:15:14 crc kubenswrapper[4930]: I1124 12:15:14.959660 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.019330 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-q5rmd" podStartSLOduration=21.875565186 podStartE2EDuration="27.019313685s" podCreationTimestamp="2025-11-24 12:14:48 +0000 UTC" firstStartedPulling="2025-11-24 12:15:05.799062227 +0000 UTC m=+952.413390187" lastFinishedPulling="2025-11-24 12:15:10.942810736 +0000 UTC m=+957.557138686" observedRunningTime="2025-11-24 12:15:15.017857783 +0000 UTC m=+961.632185743" watchObservedRunningTime="2025-11-24 12:15:15.019313685 +0000 UTC m=+961.633641635" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.090885 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbv4z\" (UniqueName: 
\"kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-kube-api-access-xbv4z\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.093162 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-cache\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.093207 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-lock\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.093329 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.093357 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.197187 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbv4z\" (UniqueName: \"kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-kube-api-access-xbv4z\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " 
pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.197336 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-cache\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.197364 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-lock\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.197426 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.197451 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: E1124 12:15:15.197928 4930 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 12:15:15 crc kubenswrapper[4930]: E1124 12:15:15.197961 4930 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.197983 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-cache\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: E1124 12:15:15.198018 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift podName:cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652 nodeName:}" failed. No retries permitted until 2025-11-24 12:15:15.697996395 +0000 UTC m=+962.312324345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift") pod "swift-storage-0" (UID: "cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652") : configmap "swift-ring-files" not found Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.198265 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.198269 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-lock\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.230064 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbv4z\" (UniqueName: \"kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-kube-api-access-xbv4z\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.233197 4930 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.468657 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-2gmcp"] Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.475104 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.477748 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.477838 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.478021 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.480612 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2gmcp"] Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.603663 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9v9l\" (UniqueName: \"kubernetes.io/projected/066844af-3950-4700-84c4-3c1043ad05e7-kube-api-access-b9v9l\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.604135 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-combined-ca-bundle\") pod \"swift-ring-rebalance-2gmcp\" (UID: 
\"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.604236 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/066844af-3950-4700-84c4-3c1043ad05e7-scripts\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.604366 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/066844af-3950-4700-84c4-3c1043ad05e7-etc-swift\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.604413 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-swiftconf\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.604436 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/066844af-3950-4700-84c4-3c1043ad05e7-ring-data-devices\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.604474 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-dispersionconf\") pod \"swift-ring-rebalance-2gmcp\" (UID: 
\"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.705983 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-swiftconf\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.706038 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/066844af-3950-4700-84c4-3c1043ad05e7-ring-data-devices\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.706078 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-dispersionconf\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.706134 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9v9l\" (UniqueName: \"kubernetes.io/projected/066844af-3950-4700-84c4-3c1043ad05e7-kube-api-access-b9v9l\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.706187 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-combined-ca-bundle\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " 
pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.706250 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/066844af-3950-4700-84c4-3c1043ad05e7-scripts\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.706296 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.706335 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/066844af-3950-4700-84c4-3c1043ad05e7-etc-swift\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.706696 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/066844af-3950-4700-84c4-3c1043ad05e7-etc-swift\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.706784 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/066844af-3950-4700-84c4-3c1043ad05e7-ring-data-devices\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: E1124 12:15:15.706846 4930 projected.go:288] Couldn't get 
configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 12:15:15 crc kubenswrapper[4930]: E1124 12:15:15.706877 4930 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 12:15:15 crc kubenswrapper[4930]: E1124 12:15:15.706935 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift podName:cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652 nodeName:}" failed. No retries permitted until 2025-11-24 12:15:16.706918645 +0000 UTC m=+963.321246595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift") pod "swift-storage-0" (UID: "cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652") : configmap "swift-ring-files" not found Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.707475 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/066844af-3950-4700-84c4-3c1043ad05e7-scripts\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.712289 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-combined-ca-bundle\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.714033 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-dispersionconf\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " 
pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.724416 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-swiftconf\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.725252 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9v9l\" (UniqueName: \"kubernetes.io/projected/066844af-3950-4700-84c4-3c1043ad05e7-kube-api-access-b9v9l\") pod \"swift-ring-rebalance-2gmcp\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.805599 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.970153 4930 generic.go:334] "Generic (PLEG): container finished" podID="bddca103-daee-4f61-9165-1f6ec4762bd1" containerID="b81afd24717691d8e0ae56b5a7fa0b7e4bb2a1b5a19b55ffbae61e4a41671d8e" exitCode=0 Nov 24 12:15:15 crc kubenswrapper[4930]: I1124 12:15:15.970203 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bddca103-daee-4f61-9165-1f6ec4762bd1","Type":"ContainerDied","Data":"b81afd24717691d8e0ae56b5a7fa0b7e4bb2a1b5a19b55ffbae61e4a41671d8e"} Nov 24 12:15:16 crc kubenswrapper[4930]: W1124 12:15:16.311869 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cd25a17_d530_48be_aac4_0011fc6c29f1.slice/crio-2e00e7dfbcf5955f98d6e6acf8313e280779c0415d09153daa0290213e74654a WatchSource:0}: Error finding container 2e00e7dfbcf5955f98d6e6acf8313e280779c0415d09153daa0290213e74654a: Status 404 returned error can't find the 
container with id 2e00e7dfbcf5955f98d6e6acf8313e280779c0415d09153daa0290213e74654a Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.388411 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.520931 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-config\") pod \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.520971 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-ovsdbserver-nb\") pod \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.521012 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjsgh\" (UniqueName: \"kubernetes.io/projected/f1e3cf45-39da-444c-87e4-cec4337e0bfe-kube-api-access-zjsgh\") pod \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.521112 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-dns-svc\") pod \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\" (UID: \"f1e3cf45-39da-444c-87e4-cec4337e0bfe\") " Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.524582 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1e3cf45-39da-444c-87e4-cec4337e0bfe-kube-api-access-zjsgh" (OuterVolumeSpecName: "kube-api-access-zjsgh") pod "f1e3cf45-39da-444c-87e4-cec4337e0bfe" (UID: 
"f1e3cf45-39da-444c-87e4-cec4337e0bfe"). InnerVolumeSpecName "kube-api-access-zjsgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.541689 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f1e3cf45-39da-444c-87e4-cec4337e0bfe" (UID: "f1e3cf45-39da-444c-87e4-cec4337e0bfe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.547731 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-config" (OuterVolumeSpecName: "config") pod "f1e3cf45-39da-444c-87e4-cec4337e0bfe" (UID: "f1e3cf45-39da-444c-87e4-cec4337e0bfe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.562891 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f1e3cf45-39da-444c-87e4-cec4337e0bfe" (UID: "f1e3cf45-39da-444c-87e4-cec4337e0bfe"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.623352 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.623383 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.623397 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjsgh\" (UniqueName: \"kubernetes.io/projected/f1e3cf45-39da-444c-87e4-cec4337e0bfe-kube-api-access-zjsgh\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.623409 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1e3cf45-39da-444c-87e4-cec4337e0bfe-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.724748 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:16 crc kubenswrapper[4930]: E1124 12:15:16.725015 4930 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 12:15:16 crc kubenswrapper[4930]: E1124 12:15:16.725052 4930 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 12:15:16 crc kubenswrapper[4930]: E1124 12:15:16.725096 4930 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift podName:cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652 nodeName:}" failed. No retries permitted until 2025-11-24 12:15:18.725081404 +0000 UTC m=+965.339409354 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift") pod "swift-storage-0" (UID: "cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652") : configmap "swift-ring-files" not found Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.987763 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" event={"ID":"f1e3cf45-39da-444c-87e4-cec4337e0bfe","Type":"ContainerDied","Data":"7b6e828fa025127dfa10e144d7f0ba20e039209006dce78ec0f68f966c1b082a"} Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.987832 4930 scope.go:117] "RemoveContainer" containerID="d9dea5a5a6dc16fac9b33ad7dc7c119e8c300d8f83cdc3b27ccc075a6459372c" Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.987783 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-dsdwm" Nov 24 12:15:16 crc kubenswrapper[4930]: I1124 12:15:16.996139 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" event={"ID":"5cd25a17-d530-48be-aac4-0011fc6c29f1","Type":"ContainerStarted","Data":"2e00e7dfbcf5955f98d6e6acf8313e280779c0415d09153daa0290213e74654a"} Nov 24 12:15:17 crc kubenswrapper[4930]: I1124 12:15:17.104001 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-dsdwm"] Nov 24 12:15:17 crc kubenswrapper[4930]: I1124 12:15:17.121132 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-dsdwm"] Nov 24 12:15:17 crc kubenswrapper[4930]: I1124 12:15:17.257083 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2gmcp"] Nov 24 12:15:17 crc kubenswrapper[4930]: W1124 12:15:17.259573 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod066844af_3950_4700_84c4_3c1043ad05e7.slice/crio-ebfdf3d6e3f61e0a3a18af89e7edb83bce1a714228856552d12dcbb9b9c1cb77 WatchSource:0}: Error finding container ebfdf3d6e3f61e0a3a18af89e7edb83bce1a714228856552d12dcbb9b9c1cb77: Status 404 returned error can't find the container with id ebfdf3d6e3f61e0a3a18af89e7edb83bce1a714228856552d12dcbb9b9c1cb77 Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.044456 4930 generic.go:334] "Generic (PLEG): container finished" podID="5cd25a17-d530-48be-aac4-0011fc6c29f1" containerID="1963ea197dfd85ea4a9ab13237b42688081b67abadb0ef757d3c132d3a122ab9" exitCode=0 Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.044523 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" event={"ID":"5cd25a17-d530-48be-aac4-0011fc6c29f1","Type":"ContainerDied","Data":"1963ea197dfd85ea4a9ab13237b42688081b67abadb0ef757d3c132d3a122ab9"} Nov 24 12:15:18 
crc kubenswrapper[4930]: I1124 12:15:18.074308 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fnxs8" event={"ID":"b4686e3a-6cd1-4ada-a593-a7cfa2598257","Type":"ContainerStarted","Data":"39339a44aa47116f42753b4a68b2c3bf0838ec9796189a1d21fa855b73ac00cb"} Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.175066 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-fnxs8" podStartSLOduration=3.22030608 podStartE2EDuration="7.175044515s" podCreationTimestamp="2025-11-24 12:15:11 +0000 UTC" firstStartedPulling="2025-11-24 12:15:12.911822279 +0000 UTC m=+959.526150239" lastFinishedPulling="2025-11-24 12:15:16.866560724 +0000 UTC m=+963.480888674" observedRunningTime="2025-11-24 12:15:18.126814657 +0000 UTC m=+964.741142687" watchObservedRunningTime="2025-11-24 12:15:18.175044515 +0000 UTC m=+964.789372465" Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.243604 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.231083044 podStartE2EDuration="32.243583316s" podCreationTimestamp="2025-11-24 12:14:46 +0000 UTC" firstStartedPulling="2025-11-24 12:15:04.831322968 +0000 UTC m=+951.445650918" lastFinishedPulling="2025-11-24 12:15:16.84382324 +0000 UTC m=+963.458151190" observedRunningTime="2025-11-24 12:15:18.197774619 +0000 UTC m=+964.812102569" watchObservedRunningTime="2025-11-24 12:15:18.243583316 +0000 UTC m=+964.857911266" Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.252298 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1e3cf45-39da-444c-87e4-cec4337e0bfe" path="/var/lib/kubelet/pods/f1e3cf45-39da-444c-87e4-cec4337e0bfe/volumes" Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.252894 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"3118a4f6-bfb6-4646-a543-2f2dcbf03681","Type":"ContainerStarted","Data":"d8ce84d21d18066f60918ae3947f706f65a29edf6a7d806a1be7ae9957315fd5"} Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.252934 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"abae5d96-d4bd-42db-8517-ac6defbb22f2","Type":"ContainerStarted","Data":"19835124fd41478966045cc4be82dfbe723224d44b344f644db90555b615df0d"} Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.252948 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" event={"ID":"444600bc-753f-4156-ba87-5b31d4197d04","Type":"ContainerStarted","Data":"7a8a91b86f790fafd166efec19965cca9baf17c543f737212f69e6eec9ffab6a"} Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.252965 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bddca103-daee-4f61-9165-1f6ec4762bd1","Type":"ContainerStarted","Data":"f6f5762a67ebd9c9d9d0d8774f2fddd36bab82419feff009e705e4167a8d5a6f"} Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.252987 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2gmcp" event={"ID":"066844af-3950-4700-84c4-3c1043ad05e7","Type":"ContainerStarted","Data":"ebfdf3d6e3f61e0a3a18af89e7edb83bce1a714228856552d12dcbb9b9c1cb77"} Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.265203 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.270951 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.596409233 podStartE2EDuration="29.270928003s" podCreationTimestamp="2025-11-24 12:14:49 +0000 UTC" firstStartedPulling="2025-11-24 12:15:05.168381674 +0000 UTC m=+951.782709624" lastFinishedPulling="2025-11-24 12:15:16.842900444 +0000 UTC 
m=+963.457228394" observedRunningTime="2025-11-24 12:15:18.265932489 +0000 UTC m=+964.880260439" watchObservedRunningTime="2025-11-24 12:15:18.270928003 +0000 UTC m=+964.885255953" Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.307469 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371996.547346 podStartE2EDuration="40.307430593s" podCreationTimestamp="2025-11-24 12:14:38 +0000 UTC" firstStartedPulling="2025-11-24 12:14:40.666070907 +0000 UTC m=+927.280398857" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:18.299458684 +0000 UTC m=+964.913786654" watchObservedRunningTime="2025-11-24 12:15:18.307430593 +0000 UTC m=+964.921758543" Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.327673 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" podStartSLOduration=5.803317501 podStartE2EDuration="6.327654155s" podCreationTimestamp="2025-11-24 12:15:12 +0000 UTC" firstStartedPulling="2025-11-24 12:15:13.400446685 +0000 UTC m=+960.014774635" lastFinishedPulling="2025-11-24 12:15:13.924783339 +0000 UTC m=+960.539111289" observedRunningTime="2025-11-24 12:15:18.321991272 +0000 UTC m=+964.936319222" watchObservedRunningTime="2025-11-24 12:15:18.327654155 +0000 UTC m=+964.941982105" Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.355819 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 24 12:15:18 crc kubenswrapper[4930]: I1124 12:15:18.816357 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:18 crc kubenswrapper[4930]: E1124 12:15:18.816554 4930 
projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 12:15:18 crc kubenswrapper[4930]: E1124 12:15:18.816881 4930 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 12:15:18 crc kubenswrapper[4930]: E1124 12:15:18.816940 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift podName:cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652 nodeName:}" failed. No retries permitted until 2025-11-24 12:15:22.81692261 +0000 UTC m=+969.431250560 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift") pod "swift-storage-0" (UID: "cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652") : configmap "swift-ring-files" not found Nov 24 12:15:19 crc kubenswrapper[4930]: I1124 12:15:19.262591 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" event={"ID":"5cd25a17-d530-48be-aac4-0011fc6c29f1","Type":"ContainerStarted","Data":"8391c1065fcc7f01453861057e9c4813752e0d09c9777918552c7f5db44ff204"} Nov 24 12:15:19 crc kubenswrapper[4930]: I1124 12:15:19.262733 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:19 crc kubenswrapper[4930]: I1124 12:15:19.263391 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:19 crc kubenswrapper[4930]: I1124 12:15:19.263441 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 24 12:15:19 crc kubenswrapper[4930]: I1124 12:15:19.282743 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" podStartSLOduration=6.282724 
podStartE2EDuration="6.282724s" podCreationTimestamp="2025-11-24 12:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:19.279846617 +0000 UTC m=+965.894174587" watchObservedRunningTime="2025-11-24 12:15:19.282724 +0000 UTC m=+965.897051950" Nov 24 12:15:19 crc kubenswrapper[4930]: I1124 12:15:19.312661 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 24 12:15:19 crc kubenswrapper[4930]: I1124 12:15:19.945976 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 24 12:15:19 crc kubenswrapper[4930]: I1124 12:15:19.946340 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 24 12:15:20 crc kubenswrapper[4930]: I1124 12:15:20.823362 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 24 12:15:20 crc kubenswrapper[4930]: I1124 12:15:20.861393 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.281181 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2gmcp" event={"ID":"066844af-3950-4700-84c4-3c1043ad05e7","Type":"ContainerStarted","Data":"6ebc089aa0d2610416422e8ea5198379f8d24f72d1cb174d7f2acb2ba30070a3"} Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.281602 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.306516 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-2gmcp" podStartSLOduration=3.061615922 podStartE2EDuration="6.306493538s" podCreationTimestamp="2025-11-24 12:15:15 +0000 UTC" 
firstStartedPulling="2025-11-24 12:15:17.262642789 +0000 UTC m=+963.876970739" lastFinishedPulling="2025-11-24 12:15:20.507520405 +0000 UTC m=+967.121848355" observedRunningTime="2025-11-24 12:15:21.300437714 +0000 UTC m=+967.914765664" watchObservedRunningTime="2025-11-24 12:15:21.306493538 +0000 UTC m=+967.920821488" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.328911 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.483332 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 24 12:15:21 crc kubenswrapper[4930]: E1124 12:15:21.483720 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1e3cf45-39da-444c-87e4-cec4337e0bfe" containerName="init" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.483738 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1e3cf45-39da-444c-87e4-cec4337e0bfe" containerName="init" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.483895 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1e3cf45-39da-444c-87e4-cec4337e0bfe" containerName="init" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.484889 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.490933 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.491145 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.491164 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7pckb" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.491361 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.505989 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.548848 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.548906 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.634687 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.673003 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.673097 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.673208 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8vgr\" (UniqueName: \"kubernetes.io/projected/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-kube-api-access-f8vgr\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.673698 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.673756 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-scripts\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.673931 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-config\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.674080 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.775405 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8vgr\" (UniqueName: \"kubernetes.io/projected/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-kube-api-access-f8vgr\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.775485 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.775514 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-scripts\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.775569 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-config\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.775637 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.775727 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.775764 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.777039 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-scripts\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.777149 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-config\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.777425 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.781490 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 
12:15:21.781490 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.781670 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:21 crc kubenswrapper[4930]: I1124 12:15:21.805367 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8vgr\" (UniqueName: \"kubernetes.io/projected/6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1-kube-api-access-f8vgr\") pod \"ovn-northd-0\" (UID: \"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1\") " pod="openstack/ovn-northd-0" Nov 24 12:15:22 crc kubenswrapper[4930]: I1124 12:15:22.102607 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 12:15:22 crc kubenswrapper[4930]: I1124 12:15:22.260289 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 24 12:15:22 crc kubenswrapper[4930]: I1124 12:15:22.359007 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 24 12:15:22 crc kubenswrapper[4930]: I1124 12:15:22.363063 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 24 12:15:22 crc kubenswrapper[4930]: I1124 12:15:22.574485 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 12:15:22 crc kubenswrapper[4930]: I1124 12:15:22.719758 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:22 crc kubenswrapper[4930]: I1124 12:15:22.896454 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:22 crc kubenswrapper[4930]: E1124 12:15:22.896729 4930 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 12:15:22 crc kubenswrapper[4930]: E1124 12:15:22.896765 4930 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 12:15:22 crc kubenswrapper[4930]: E1124 12:15:22.896832 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift podName:cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652 nodeName:}" failed. 
No retries permitted until 2025-11-24 12:15:30.896809526 +0000 UTC m=+977.511137476 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift") pod "swift-storage-0" (UID: "cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652") : configmap "swift-ring-files" not found Nov 24 12:15:23 crc kubenswrapper[4930]: I1124 12:15:23.310109 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1","Type":"ContainerStarted","Data":"58da9347c796e04b1d4dfb8dd798b862dcd8d37e414d4b83b85cc9bc8ccc06bb"} Nov 24 12:15:23 crc kubenswrapper[4930]: I1124 12:15:23.836396 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 12:15:24 crc kubenswrapper[4930]: I1124 12:15:24.203689 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:15:24 crc kubenswrapper[4930]: I1124 12:15:24.269554 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-vpctc"] Nov 24 12:15:24 crc kubenswrapper[4930]: I1124 12:15:24.269825 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" podUID="444600bc-753f-4156-ba87-5b31d4197d04" containerName="dnsmasq-dns" containerID="cri-o://7a8a91b86f790fafd166efec19965cca9baf17c543f737212f69e6eec9ffab6a" gracePeriod=10 Nov 24 12:15:27 crc kubenswrapper[4930]: I1124 12:15:27.719882 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" podUID="444600bc-753f-4156-ba87-5b31d4197d04" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Nov 24 12:15:28 crc kubenswrapper[4930]: I1124 12:15:28.861850 4930 generic.go:334] "Generic (PLEG): container finished" 
podID="444600bc-753f-4156-ba87-5b31d4197d04" containerID="7a8a91b86f790fafd166efec19965cca9baf17c543f737212f69e6eec9ffab6a" exitCode=0 Nov 24 12:15:28 crc kubenswrapper[4930]: I1124 12:15:28.861902 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" event={"ID":"444600bc-753f-4156-ba87-5b31d4197d04","Type":"ContainerDied","Data":"7a8a91b86f790fafd166efec19965cca9baf17c543f737212f69e6eec9ffab6a"} Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.165706 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.360372 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-ovsdbserver-nb\") pod \"444600bc-753f-4156-ba87-5b31d4197d04\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.360829 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggfzx\" (UniqueName: \"kubernetes.io/projected/444600bc-753f-4156-ba87-5b31d4197d04-kube-api-access-ggfzx\") pod \"444600bc-753f-4156-ba87-5b31d4197d04\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.360895 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-dns-svc\") pod \"444600bc-753f-4156-ba87-5b31d4197d04\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.360918 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-config\") pod \"444600bc-753f-4156-ba87-5b31d4197d04\" (UID: 
\"444600bc-753f-4156-ba87-5b31d4197d04\") " Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.360950 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-ovsdbserver-sb\") pod \"444600bc-753f-4156-ba87-5b31d4197d04\" (UID: \"444600bc-753f-4156-ba87-5b31d4197d04\") " Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.367013 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444600bc-753f-4156-ba87-5b31d4197d04-kube-api-access-ggfzx" (OuterVolumeSpecName: "kube-api-access-ggfzx") pod "444600bc-753f-4156-ba87-5b31d4197d04" (UID: "444600bc-753f-4156-ba87-5b31d4197d04"). InnerVolumeSpecName "kube-api-access-ggfzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.402710 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "444600bc-753f-4156-ba87-5b31d4197d04" (UID: "444600bc-753f-4156-ba87-5b31d4197d04"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.404807 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "444600bc-753f-4156-ba87-5b31d4197d04" (UID: "444600bc-753f-4156-ba87-5b31d4197d04"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.416698 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-config" (OuterVolumeSpecName: "config") pod "444600bc-753f-4156-ba87-5b31d4197d04" (UID: "444600bc-753f-4156-ba87-5b31d4197d04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.417071 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "444600bc-753f-4156-ba87-5b31d4197d04" (UID: "444600bc-753f-4156-ba87-5b31d4197d04"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.462816 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggfzx\" (UniqueName: \"kubernetes.io/projected/444600bc-753f-4156-ba87-5b31d4197d04-kube-api-access-ggfzx\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.462854 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.462870 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.462882 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 
12:15:30.462892 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/444600bc-753f-4156-ba87-5b31d4197d04-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.879233 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.879202 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-vpctc" event={"ID":"444600bc-753f-4156-ba87-5b31d4197d04","Type":"ContainerDied","Data":"c4d41c0bc1bf47eaf6cb00d9668be98f538f12d7162ba382217664d8fd4f85c4"} Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.879332 4930 scope.go:117] "RemoveContainer" containerID="7a8a91b86f790fafd166efec19965cca9baf17c543f737212f69e6eec9ffab6a" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.882771 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1","Type":"ContainerStarted","Data":"6cb4f25ea30e8b969b62758efe2d673483cf8785db68a14fce142aa3fe14d9f7"} Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.882932 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.882957 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1","Type":"ContainerStarted","Data":"ac1a9dcfaf40ee04ed4993cdbeb6475cb2e0498db247157a481ef261acfb4d3c"} Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.912413 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.497908049 podStartE2EDuration="9.912395262s" podCreationTimestamp="2025-11-24 12:15:21 +0000 UTC" firstStartedPulling="2025-11-24 12:15:22.576339918 
+0000 UTC m=+969.190667868" lastFinishedPulling="2025-11-24 12:15:29.990827131 +0000 UTC m=+976.605155081" observedRunningTime="2025-11-24 12:15:30.902802496 +0000 UTC m=+977.517130446" watchObservedRunningTime="2025-11-24 12:15:30.912395262 +0000 UTC m=+977.526723212" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.913677 4930 scope.go:117] "RemoveContainer" containerID="c22dfe83d528c319a11a76ea33a5d8e628f5bcb11360091d6aa7e383331fd328" Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.935547 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-vpctc"] Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.943685 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-vpctc"] Nov 24 12:15:30 crc kubenswrapper[4930]: I1124 12:15:30.975742 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:30 crc kubenswrapper[4930]: E1124 12:15:30.976275 4930 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 12:15:30 crc kubenswrapper[4930]: E1124 12:15:30.976308 4930 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 12:15:30 crc kubenswrapper[4930]: E1124 12:15:30.976423 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift podName:cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652 nodeName:}" failed. No retries permitted until 2025-11-24 12:15:46.976376172 +0000 UTC m=+993.590704172 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift") pod "swift-storage-0" (UID: "cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652") : configmap "swift-ring-files" not found Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.425481 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e735-account-create-rgxch"] Nov 24 12:15:31 crc kubenswrapper[4930]: E1124 12:15:31.426314 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444600bc-753f-4156-ba87-5b31d4197d04" containerName="dnsmasq-dns" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.426331 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="444600bc-753f-4156-ba87-5b31d4197d04" containerName="dnsmasq-dns" Nov 24 12:15:31 crc kubenswrapper[4930]: E1124 12:15:31.426363 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444600bc-753f-4156-ba87-5b31d4197d04" containerName="init" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.426370 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="444600bc-753f-4156-ba87-5b31d4197d04" containerName="init" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.426572 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="444600bc-753f-4156-ba87-5b31d4197d04" containerName="dnsmasq-dns" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.427288 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e735-account-create-rgxch" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.429483 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.438192 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e735-account-create-rgxch"] Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.466030 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-4wqjp"] Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.467159 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4wqjp" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.493313 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4wqjp"] Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.585978 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f798dc87-b597-474d-a8f3-5a46781865cd-operator-scripts\") pod \"keystone-db-create-4wqjp\" (UID: \"f798dc87-b597-474d-a8f3-5a46781865cd\") " pod="openstack/keystone-db-create-4wqjp" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.586218 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp8ph\" (UniqueName: \"kubernetes.io/projected/f798dc87-b597-474d-a8f3-5a46781865cd-kube-api-access-zp8ph\") pod \"keystone-db-create-4wqjp\" (UID: \"f798dc87-b597-474d-a8f3-5a46781865cd\") " pod="openstack/keystone-db-create-4wqjp" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.586470 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c8b0739a-ce35-40bb-929e-38d59642bd43-operator-scripts\") pod \"keystone-e735-account-create-rgxch\" (UID: \"c8b0739a-ce35-40bb-929e-38d59642bd43\") " pod="openstack/keystone-e735-account-create-rgxch" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.586562 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx2mn\" (UniqueName: \"kubernetes.io/projected/c8b0739a-ce35-40bb-929e-38d59642bd43-kube-api-access-gx2mn\") pod \"keystone-e735-account-create-rgxch\" (UID: \"c8b0739a-ce35-40bb-929e-38d59642bd43\") " pod="openstack/keystone-e735-account-create-rgxch" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.689804 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b0739a-ce35-40bb-929e-38d59642bd43-operator-scripts\") pod \"keystone-e735-account-create-rgxch\" (UID: \"c8b0739a-ce35-40bb-929e-38d59642bd43\") " pod="openstack/keystone-e735-account-create-rgxch" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.689885 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx2mn\" (UniqueName: \"kubernetes.io/projected/c8b0739a-ce35-40bb-929e-38d59642bd43-kube-api-access-gx2mn\") pod \"keystone-e735-account-create-rgxch\" (UID: \"c8b0739a-ce35-40bb-929e-38d59642bd43\") " pod="openstack/keystone-e735-account-create-rgxch" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.689948 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f798dc87-b597-474d-a8f3-5a46781865cd-operator-scripts\") pod \"keystone-db-create-4wqjp\" (UID: \"f798dc87-b597-474d-a8f3-5a46781865cd\") " pod="openstack/keystone-db-create-4wqjp" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.690042 4930 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-zp8ph\" (UniqueName: \"kubernetes.io/projected/f798dc87-b597-474d-a8f3-5a46781865cd-kube-api-access-zp8ph\") pod \"keystone-db-create-4wqjp\" (UID: \"f798dc87-b597-474d-a8f3-5a46781865cd\") " pod="openstack/keystone-db-create-4wqjp" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.690623 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b0739a-ce35-40bb-929e-38d59642bd43-operator-scripts\") pod \"keystone-e735-account-create-rgxch\" (UID: \"c8b0739a-ce35-40bb-929e-38d59642bd43\") " pod="openstack/keystone-e735-account-create-rgxch" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.690799 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f798dc87-b597-474d-a8f3-5a46781865cd-operator-scripts\") pod \"keystone-db-create-4wqjp\" (UID: \"f798dc87-b597-474d-a8f3-5a46781865cd\") " pod="openstack/keystone-db-create-4wqjp" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.691900 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-k9s8c"] Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.693302 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k9s8c" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.710828 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx2mn\" (UniqueName: \"kubernetes.io/projected/c8b0739a-ce35-40bb-929e-38d59642bd43-kube-api-access-gx2mn\") pod \"keystone-e735-account-create-rgxch\" (UID: \"c8b0739a-ce35-40bb-929e-38d59642bd43\") " pod="openstack/keystone-e735-account-create-rgxch" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.757698 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e735-account-create-rgxch" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.758308 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp8ph\" (UniqueName: \"kubernetes.io/projected/f798dc87-b597-474d-a8f3-5a46781865cd-kube-api-access-zp8ph\") pod \"keystone-db-create-4wqjp\" (UID: \"f798dc87-b597-474d-a8f3-5a46781865cd\") " pod="openstack/keystone-db-create-4wqjp" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.762263 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-k9s8c"] Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.786167 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4wqjp" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.791217 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30dab223-0b89-4e97-a40d-6913ffa6e8b4-operator-scripts\") pod \"placement-db-create-k9s8c\" (UID: \"30dab223-0b89-4e97-a40d-6913ffa6e8b4\") " pod="openstack/placement-db-create-k9s8c" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.791283 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k64vk\" (UniqueName: \"kubernetes.io/projected/30dab223-0b89-4e97-a40d-6913ffa6e8b4-kube-api-access-k64vk\") pod \"placement-db-create-k9s8c\" (UID: \"30dab223-0b89-4e97-a40d-6913ffa6e8b4\") " pod="openstack/placement-db-create-k9s8c" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.910055 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30dab223-0b89-4e97-a40d-6913ffa6e8b4-operator-scripts\") pod \"placement-db-create-k9s8c\" (UID: \"30dab223-0b89-4e97-a40d-6913ffa6e8b4\") " 
pod="openstack/placement-db-create-k9s8c" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.910467 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k64vk\" (UniqueName: \"kubernetes.io/projected/30dab223-0b89-4e97-a40d-6913ffa6e8b4-kube-api-access-k64vk\") pod \"placement-db-create-k9s8c\" (UID: \"30dab223-0b89-4e97-a40d-6913ffa6e8b4\") " pod="openstack/placement-db-create-k9s8c" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.911491 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30dab223-0b89-4e97-a40d-6913ffa6e8b4-operator-scripts\") pod \"placement-db-create-k9s8c\" (UID: \"30dab223-0b89-4e97-a40d-6913ffa6e8b4\") " pod="openstack/placement-db-create-k9s8c" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.926506 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a730-account-create-spt82"] Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.927690 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a730-account-create-spt82" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.930692 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k64vk\" (UniqueName: \"kubernetes.io/projected/30dab223-0b89-4e97-a40d-6913ffa6e8b4-kube-api-access-k64vk\") pod \"placement-db-create-k9s8c\" (UID: \"30dab223-0b89-4e97-a40d-6913ffa6e8b4\") " pod="openstack/placement-db-create-k9s8c" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.930844 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 24 12:15:31 crc kubenswrapper[4930]: I1124 12:15:31.934695 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a730-account-create-spt82"] Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.012042 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545768db-9e2f-48e9-92a8-7eaa401eb0b0-operator-scripts\") pod \"placement-a730-account-create-spt82\" (UID: \"545768db-9e2f-48e9-92a8-7eaa401eb0b0\") " pod="openstack/placement-a730-account-create-spt82" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.012181 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxtdn\" (UniqueName: \"kubernetes.io/projected/545768db-9e2f-48e9-92a8-7eaa401eb0b0-kube-api-access-kxtdn\") pod \"placement-a730-account-create-spt82\" (UID: \"545768db-9e2f-48e9-92a8-7eaa401eb0b0\") " pod="openstack/placement-a730-account-create-spt82" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.096062 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444600bc-753f-4156-ba87-5b31d4197d04" path="/var/lib/kubelet/pods/444600bc-753f-4156-ba87-5b31d4197d04/volumes" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.114668 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545768db-9e2f-48e9-92a8-7eaa401eb0b0-operator-scripts\") pod \"placement-a730-account-create-spt82\" (UID: \"545768db-9e2f-48e9-92a8-7eaa401eb0b0\") " pod="openstack/placement-a730-account-create-spt82" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.114842 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxtdn\" (UniqueName: \"kubernetes.io/projected/545768db-9e2f-48e9-92a8-7eaa401eb0b0-kube-api-access-kxtdn\") pod \"placement-a730-account-create-spt82\" (UID: \"545768db-9e2f-48e9-92a8-7eaa401eb0b0\") " pod="openstack/placement-a730-account-create-spt82" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.115428 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545768db-9e2f-48e9-92a8-7eaa401eb0b0-operator-scripts\") pod \"placement-a730-account-create-spt82\" (UID: \"545768db-9e2f-48e9-92a8-7eaa401eb0b0\") " pod="openstack/placement-a730-account-create-spt82" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.144342 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxtdn\" (UniqueName: \"kubernetes.io/projected/545768db-9e2f-48e9-92a8-7eaa401eb0b0-kube-api-access-kxtdn\") pod \"placement-a730-account-create-spt82\" (UID: \"545768db-9e2f-48e9-92a8-7eaa401eb0b0\") " pod="openstack/placement-a730-account-create-spt82" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.189198 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-k9s8c" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.250906 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4wqjp"] Nov 24 12:15:32 crc kubenswrapper[4930]: W1124 12:15:32.261005 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf798dc87_b597_474d_a8f3_5a46781865cd.slice/crio-2193801b6b681e74386287f0fdf0562eb6e378d91ab11615203cc3e27c22807e WatchSource:0}: Error finding container 2193801b6b681e74386287f0fdf0562eb6e378d91ab11615203cc3e27c22807e: Status 404 returned error can't find the container with id 2193801b6b681e74386287f0fdf0562eb6e378d91ab11615203cc3e27c22807e Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.266485 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e735-account-create-rgxch"] Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.294921 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a730-account-create-spt82" Nov 24 12:15:32 crc kubenswrapper[4930]: W1124 12:15:32.321036 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8b0739a_ce35_40bb_929e_38d59642bd43.slice/crio-cdcd496703357aff68c96965023c4a12201e22e2f2c92f34a87beae49fcc433d WatchSource:0}: Error finding container cdcd496703357aff68c96965023c4a12201e22e2f2c92f34a87beae49fcc433d: Status 404 returned error can't find the container with id cdcd496703357aff68c96965023c4a12201e22e2f2c92f34a87beae49fcc433d Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.322227 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rrwzz"] Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.325977 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-rrwzz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.337319 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rrwzz"] Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.463687 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-04b0-account-create-4dtrz"] Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.466349 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-04b0-account-create-4dtrz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.469866 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.472926 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-04b0-account-create-4dtrz"] Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.527877 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn4tg\" (UniqueName: \"kubernetes.io/projected/2ec383ee-4477-4b17-be08-b1bdcea73a7f-kube-api-access-kn4tg\") pod \"glance-db-create-rrwzz\" (UID: \"2ec383ee-4477-4b17-be08-b1bdcea73a7f\") " pod="openstack/glance-db-create-rrwzz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.527999 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ec383ee-4477-4b17-be08-b1bdcea73a7f-operator-scripts\") pod \"glance-db-create-rrwzz\" (UID: \"2ec383ee-4477-4b17-be08-b1bdcea73a7f\") " pod="openstack/glance-db-create-rrwzz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.528094 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73c9eec6-bdfe-4456-a0ca-37c205ac5cba-operator-scripts\") pod 
\"glance-04b0-account-create-4dtrz\" (UID: \"73c9eec6-bdfe-4456-a0ca-37c205ac5cba\") " pod="openstack/glance-04b0-account-create-4dtrz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.528132 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr2pj\" (UniqueName: \"kubernetes.io/projected/73c9eec6-bdfe-4456-a0ca-37c205ac5cba-kube-api-access-rr2pj\") pod \"glance-04b0-account-create-4dtrz\" (UID: \"73c9eec6-bdfe-4456-a0ca-37c205ac5cba\") " pod="openstack/glance-04b0-account-create-4dtrz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.629228 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73c9eec6-bdfe-4456-a0ca-37c205ac5cba-operator-scripts\") pod \"glance-04b0-account-create-4dtrz\" (UID: \"73c9eec6-bdfe-4456-a0ca-37c205ac5cba\") " pod="openstack/glance-04b0-account-create-4dtrz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.629274 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr2pj\" (UniqueName: \"kubernetes.io/projected/73c9eec6-bdfe-4456-a0ca-37c205ac5cba-kube-api-access-rr2pj\") pod \"glance-04b0-account-create-4dtrz\" (UID: \"73c9eec6-bdfe-4456-a0ca-37c205ac5cba\") " pod="openstack/glance-04b0-account-create-4dtrz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.629301 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn4tg\" (UniqueName: \"kubernetes.io/projected/2ec383ee-4477-4b17-be08-b1bdcea73a7f-kube-api-access-kn4tg\") pod \"glance-db-create-rrwzz\" (UID: \"2ec383ee-4477-4b17-be08-b1bdcea73a7f\") " pod="openstack/glance-db-create-rrwzz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.629370 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2ec383ee-4477-4b17-be08-b1bdcea73a7f-operator-scripts\") pod \"glance-db-create-rrwzz\" (UID: \"2ec383ee-4477-4b17-be08-b1bdcea73a7f\") " pod="openstack/glance-db-create-rrwzz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.630152 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ec383ee-4477-4b17-be08-b1bdcea73a7f-operator-scripts\") pod \"glance-db-create-rrwzz\" (UID: \"2ec383ee-4477-4b17-be08-b1bdcea73a7f\") " pod="openstack/glance-db-create-rrwzz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.630178 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73c9eec6-bdfe-4456-a0ca-37c205ac5cba-operator-scripts\") pod \"glance-04b0-account-create-4dtrz\" (UID: \"73c9eec6-bdfe-4456-a0ca-37c205ac5cba\") " pod="openstack/glance-04b0-account-create-4dtrz" Nov 24 12:15:32 crc kubenswrapper[4930]: W1124 12:15:32.648223 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30dab223_0b89_4e97_a40d_6913ffa6e8b4.slice/crio-d1c96f6ce4f553f5f5796af894f9bce71846b3d524a49c89d50779566464de2b WatchSource:0}: Error finding container d1c96f6ce4f553f5f5796af894f9bce71846b3d524a49c89d50779566464de2b: Status 404 returned error can't find the container with id d1c96f6ce4f553f5f5796af894f9bce71846b3d524a49c89d50779566464de2b Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.648994 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn4tg\" (UniqueName: \"kubernetes.io/projected/2ec383ee-4477-4b17-be08-b1bdcea73a7f-kube-api-access-kn4tg\") pod \"glance-db-create-rrwzz\" (UID: \"2ec383ee-4477-4b17-be08-b1bdcea73a7f\") " pod="openstack/glance-db-create-rrwzz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.649900 4930 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/placement-db-create-k9s8c"] Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.649950 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr2pj\" (UniqueName: \"kubernetes.io/projected/73c9eec6-bdfe-4456-a0ca-37c205ac5cba-kube-api-access-rr2pj\") pod \"glance-04b0-account-create-4dtrz\" (UID: \"73c9eec6-bdfe-4456-a0ca-37c205ac5cba\") " pod="openstack/glance-04b0-account-create-4dtrz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.683259 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rrwzz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.809890 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-04b0-account-create-4dtrz" Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.856075 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a730-account-create-spt82"] Nov 24 12:15:32 crc kubenswrapper[4930]: W1124 12:15:32.867722 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod545768db_9e2f_48e9_92a8_7eaa401eb0b0.slice/crio-927f018d4889410949661501cc47c7004b5145774bf78783a563dc77852c5d75 WatchSource:0}: Error finding container 927f018d4889410949661501cc47c7004b5145774bf78783a563dc77852c5d75: Status 404 returned error can't find the container with id 927f018d4889410949661501cc47c7004b5145774bf78783a563dc77852c5d75 Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.942751 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e735-account-create-rgxch" event={"ID":"c8b0739a-ce35-40bb-929e-38d59642bd43","Type":"ContainerDied","Data":"671a7abe5251b85867ed9cf8e414f61712079a702c095b47297f47205229c56e"} Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.943524 4930 generic.go:334] "Generic (PLEG): container finished" 
podID="c8b0739a-ce35-40bb-929e-38d59642bd43" containerID="671a7abe5251b85867ed9cf8e414f61712079a702c095b47297f47205229c56e" exitCode=0 Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.943702 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e735-account-create-rgxch" event={"ID":"c8b0739a-ce35-40bb-929e-38d59642bd43","Type":"ContainerStarted","Data":"cdcd496703357aff68c96965023c4a12201e22e2f2c92f34a87beae49fcc433d"} Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.945843 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a730-account-create-spt82" event={"ID":"545768db-9e2f-48e9-92a8-7eaa401eb0b0","Type":"ContainerStarted","Data":"927f018d4889410949661501cc47c7004b5145774bf78783a563dc77852c5d75"} Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.948794 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k9s8c" event={"ID":"30dab223-0b89-4e97-a40d-6913ffa6e8b4","Type":"ContainerStarted","Data":"64e59a1906323723c9d02214b6dbe080d7104df19eea71f3ba917c849c99ea78"} Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.948851 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k9s8c" event={"ID":"30dab223-0b89-4e97-a40d-6913ffa6e8b4","Type":"ContainerStarted","Data":"d1c96f6ce4f553f5f5796af894f9bce71846b3d524a49c89d50779566464de2b"} Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.950987 4930 generic.go:334] "Generic (PLEG): container finished" podID="f798dc87-b597-474d-a8f3-5a46781865cd" containerID="df753abe40f242dfd100eb9e49188ae94fac33ec2580c4494207b8afa715642f" exitCode=0 Nov 24 12:15:32 crc kubenswrapper[4930]: I1124 12:15:32.951446 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4wqjp" event={"ID":"f798dc87-b597-474d-a8f3-5a46781865cd","Type":"ContainerDied","Data":"df753abe40f242dfd100eb9e49188ae94fac33ec2580c4494207b8afa715642f"} Nov 24 12:15:32 crc 
kubenswrapper[4930]: I1124 12:15:32.951477 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4wqjp" event={"ID":"f798dc87-b597-474d-a8f3-5a46781865cd","Type":"ContainerStarted","Data":"2193801b6b681e74386287f0fdf0562eb6e378d91ab11615203cc3e27c22807e"} Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.023908 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-k9s8c" podStartSLOduration=2.023875572 podStartE2EDuration="2.023875572s" podCreationTimestamp="2025-11-24 12:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:33.007644505 +0000 UTC m=+979.621972455" watchObservedRunningTime="2025-11-24 12:15:33.023875572 +0000 UTC m=+979.638203522" Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.136378 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rrwzz"] Nov 24 12:15:33 crc kubenswrapper[4930]: W1124 12:15:33.143883 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ec383ee_4477_4b17_be08_b1bdcea73a7f.slice/crio-0a52f86c1cc9a490392bdbfdb8fba287c14ce574f195197d5e6551a19524c93a WatchSource:0}: Error finding container 0a52f86c1cc9a490392bdbfdb8fba287c14ce574f195197d5e6551a19524c93a: Status 404 returned error can't find the container with id 0a52f86c1cc9a490392bdbfdb8fba287c14ce574f195197d5e6551a19524c93a Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.269157 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-04b0-account-create-4dtrz"] Nov 24 12:15:33 crc kubenswrapper[4930]: W1124 12:15:33.309857 4930 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c9eec6_bdfe_4456_a0ca_37c205ac5cba.slice/crio-aa829912bc10dd3f1b4f2e1ef27015b59791edcd6d8ca3d5f3bbd796166df3de WatchSource:0}: Error finding container aa829912bc10dd3f1b4f2e1ef27015b59791edcd6d8ca3d5f3bbd796166df3de: Status 404 returned error can't find the container with id aa829912bc10dd3f1b4f2e1ef27015b59791edcd6d8ca3d5f3bbd796166df3de Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.960449 4930 generic.go:334] "Generic (PLEG): container finished" podID="066844af-3950-4700-84c4-3c1043ad05e7" containerID="6ebc089aa0d2610416422e8ea5198379f8d24f72d1cb174d7f2acb2ba30070a3" exitCode=0 Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.960590 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2gmcp" event={"ID":"066844af-3950-4700-84c4-3c1043ad05e7","Type":"ContainerDied","Data":"6ebc089aa0d2610416422e8ea5198379f8d24f72d1cb174d7f2acb2ba30070a3"} Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.964211 4930 generic.go:334] "Generic (PLEG): container finished" podID="73c9eec6-bdfe-4456-a0ca-37c205ac5cba" containerID="e9d4e9596371f60129b9c619833e145b0af4900738eb90a358bd58b1a1a004d8" exitCode=0 Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.964276 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-04b0-account-create-4dtrz" event={"ID":"73c9eec6-bdfe-4456-a0ca-37c205ac5cba","Type":"ContainerDied","Data":"e9d4e9596371f60129b9c619833e145b0af4900738eb90a358bd58b1a1a004d8"} Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.964388 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-04b0-account-create-4dtrz" event={"ID":"73c9eec6-bdfe-4456-a0ca-37c205ac5cba","Type":"ContainerStarted","Data":"aa829912bc10dd3f1b4f2e1ef27015b59791edcd6d8ca3d5f3bbd796166df3de"} Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.965996 4930 generic.go:334] "Generic (PLEG): container finished" 
podID="545768db-9e2f-48e9-92a8-7eaa401eb0b0" containerID="63ab48553dc6e035e615f1745def12f81e794331a4b2bed7e0ca19e4596f8ab6" exitCode=0 Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.966058 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a730-account-create-spt82" event={"ID":"545768db-9e2f-48e9-92a8-7eaa401eb0b0","Type":"ContainerDied","Data":"63ab48553dc6e035e615f1745def12f81e794331a4b2bed7e0ca19e4596f8ab6"} Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.967528 4930 generic.go:334] "Generic (PLEG): container finished" podID="2ec383ee-4477-4b17-be08-b1bdcea73a7f" containerID="140a36fa0161f5c54adac070017088ccac6d36708059104c64c420984c39628a" exitCode=0 Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.967620 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rrwzz" event={"ID":"2ec383ee-4477-4b17-be08-b1bdcea73a7f","Type":"ContainerDied","Data":"140a36fa0161f5c54adac070017088ccac6d36708059104c64c420984c39628a"} Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.967648 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rrwzz" event={"ID":"2ec383ee-4477-4b17-be08-b1bdcea73a7f","Type":"ContainerStarted","Data":"0a52f86c1cc9a490392bdbfdb8fba287c14ce574f195197d5e6551a19524c93a"} Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.968829 4930 generic.go:334] "Generic (PLEG): container finished" podID="30dab223-0b89-4e97-a40d-6913ffa6e8b4" containerID="64e59a1906323723c9d02214b6dbe080d7104df19eea71f3ba917c849c99ea78" exitCode=0 Nov 24 12:15:33 crc kubenswrapper[4930]: I1124 12:15:33.968909 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k9s8c" event={"ID":"30dab223-0b89-4e97-a40d-6913ffa6e8b4","Type":"ContainerDied","Data":"64e59a1906323723c9d02214b6dbe080d7104df19eea71f3ba917c849c99ea78"} Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.274101 4930 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/keystone-db-create-4wqjp" Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.386369 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e735-account-create-rgxch" Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.457137 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp8ph\" (UniqueName: \"kubernetes.io/projected/f798dc87-b597-474d-a8f3-5a46781865cd-kube-api-access-zp8ph\") pod \"f798dc87-b597-474d-a8f3-5a46781865cd\" (UID: \"f798dc87-b597-474d-a8f3-5a46781865cd\") " Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.457454 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f798dc87-b597-474d-a8f3-5a46781865cd-operator-scripts\") pod \"f798dc87-b597-474d-a8f3-5a46781865cd\" (UID: \"f798dc87-b597-474d-a8f3-5a46781865cd\") " Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.458152 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f798dc87-b597-474d-a8f3-5a46781865cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f798dc87-b597-474d-a8f3-5a46781865cd" (UID: "f798dc87-b597-474d-a8f3-5a46781865cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.463965 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f798dc87-b597-474d-a8f3-5a46781865cd-kube-api-access-zp8ph" (OuterVolumeSpecName: "kube-api-access-zp8ph") pod "f798dc87-b597-474d-a8f3-5a46781865cd" (UID: "f798dc87-b597-474d-a8f3-5a46781865cd"). InnerVolumeSpecName "kube-api-access-zp8ph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.558574 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx2mn\" (UniqueName: \"kubernetes.io/projected/c8b0739a-ce35-40bb-929e-38d59642bd43-kube-api-access-gx2mn\") pod \"c8b0739a-ce35-40bb-929e-38d59642bd43\" (UID: \"c8b0739a-ce35-40bb-929e-38d59642bd43\") " Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.558694 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b0739a-ce35-40bb-929e-38d59642bd43-operator-scripts\") pod \"c8b0739a-ce35-40bb-929e-38d59642bd43\" (UID: \"c8b0739a-ce35-40bb-929e-38d59642bd43\") " Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.559001 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f798dc87-b597-474d-a8f3-5a46781865cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.559017 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp8ph\" (UniqueName: \"kubernetes.io/projected/f798dc87-b597-474d-a8f3-5a46781865cd-kube-api-access-zp8ph\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.559148 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8b0739a-ce35-40bb-929e-38d59642bd43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8b0739a-ce35-40bb-929e-38d59642bd43" (UID: "c8b0739a-ce35-40bb-929e-38d59642bd43"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.561765 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b0739a-ce35-40bb-929e-38d59642bd43-kube-api-access-gx2mn" (OuterVolumeSpecName: "kube-api-access-gx2mn") pod "c8b0739a-ce35-40bb-929e-38d59642bd43" (UID: "c8b0739a-ce35-40bb-929e-38d59642bd43"). InnerVolumeSpecName "kube-api-access-gx2mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.661169 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b0739a-ce35-40bb-929e-38d59642bd43-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.661215 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx2mn\" (UniqueName: \"kubernetes.io/projected/c8b0739a-ce35-40bb-929e-38d59642bd43-kube-api-access-gx2mn\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.978839 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-4wqjp" Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.978838 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4wqjp" event={"ID":"f798dc87-b597-474d-a8f3-5a46781865cd","Type":"ContainerDied","Data":"2193801b6b681e74386287f0fdf0562eb6e378d91ab11615203cc3e27c22807e"} Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.980073 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2193801b6b681e74386287f0fdf0562eb6e378d91ab11615203cc3e27c22807e" Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.980656 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e735-account-create-rgxch" event={"ID":"c8b0739a-ce35-40bb-929e-38d59642bd43","Type":"ContainerDied","Data":"cdcd496703357aff68c96965023c4a12201e22e2f2c92f34a87beae49fcc433d"} Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.980697 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdcd496703357aff68c96965023c4a12201e22e2f2c92f34a87beae49fcc433d" Nov 24 12:15:34 crc kubenswrapper[4930]: I1124 12:15:34.980740 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e735-account-create-rgxch" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.441615 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k9s8c" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.547570 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a730-account-create-spt82" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.551844 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-rrwzz" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.560024 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.567782 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-04b0-account-create-4dtrz" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.574327 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30dab223-0b89-4e97-a40d-6913ffa6e8b4-operator-scripts\") pod \"30dab223-0b89-4e97-a40d-6913ffa6e8b4\" (UID: \"30dab223-0b89-4e97-a40d-6913ffa6e8b4\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.574504 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k64vk\" (UniqueName: \"kubernetes.io/projected/30dab223-0b89-4e97-a40d-6913ffa6e8b4-kube-api-access-k64vk\") pod \"30dab223-0b89-4e97-a40d-6913ffa6e8b4\" (UID: \"30dab223-0b89-4e97-a40d-6913ffa6e8b4\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.575114 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30dab223-0b89-4e97-a40d-6913ffa6e8b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30dab223-0b89-4e97-a40d-6913ffa6e8b4" (UID: "30dab223-0b89-4e97-a40d-6913ffa6e8b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.581754 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30dab223-0b89-4e97-a40d-6913ffa6e8b4-kube-api-access-k64vk" (OuterVolumeSpecName: "kube-api-access-k64vk") pod "30dab223-0b89-4e97-a40d-6913ffa6e8b4" (UID: "30dab223-0b89-4e97-a40d-6913ffa6e8b4"). 
InnerVolumeSpecName "kube-api-access-k64vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676140 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-swiftconf\") pod \"066844af-3950-4700-84c4-3c1043ad05e7\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676215 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ec383ee-4477-4b17-be08-b1bdcea73a7f-operator-scripts\") pod \"2ec383ee-4477-4b17-be08-b1bdcea73a7f\" (UID: \"2ec383ee-4477-4b17-be08-b1bdcea73a7f\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676234 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr2pj\" (UniqueName: \"kubernetes.io/projected/73c9eec6-bdfe-4456-a0ca-37c205ac5cba-kube-api-access-rr2pj\") pod \"73c9eec6-bdfe-4456-a0ca-37c205ac5cba\" (UID: \"73c9eec6-bdfe-4456-a0ca-37c205ac5cba\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676288 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/066844af-3950-4700-84c4-3c1043ad05e7-ring-data-devices\") pod \"066844af-3950-4700-84c4-3c1043ad05e7\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676322 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-combined-ca-bundle\") pod \"066844af-3950-4700-84c4-3c1043ad05e7\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676343 4930 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/066844af-3950-4700-84c4-3c1043ad05e7-etc-swift\") pod \"066844af-3950-4700-84c4-3c1043ad05e7\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676378 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545768db-9e2f-48e9-92a8-7eaa401eb0b0-operator-scripts\") pod \"545768db-9e2f-48e9-92a8-7eaa401eb0b0\" (UID: \"545768db-9e2f-48e9-92a8-7eaa401eb0b0\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676425 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-dispersionconf\") pod \"066844af-3950-4700-84c4-3c1043ad05e7\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676453 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73c9eec6-bdfe-4456-a0ca-37c205ac5cba-operator-scripts\") pod \"73c9eec6-bdfe-4456-a0ca-37c205ac5cba\" (UID: \"73c9eec6-bdfe-4456-a0ca-37c205ac5cba\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676493 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxtdn\" (UniqueName: \"kubernetes.io/projected/545768db-9e2f-48e9-92a8-7eaa401eb0b0-kube-api-access-kxtdn\") pod \"545768db-9e2f-48e9-92a8-7eaa401eb0b0\" (UID: \"545768db-9e2f-48e9-92a8-7eaa401eb0b0\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676528 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/066844af-3950-4700-84c4-3c1043ad05e7-scripts\") pod \"066844af-3950-4700-84c4-3c1043ad05e7\" (UID: 
\"066844af-3950-4700-84c4-3c1043ad05e7\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676610 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn4tg\" (UniqueName: \"kubernetes.io/projected/2ec383ee-4477-4b17-be08-b1bdcea73a7f-kube-api-access-kn4tg\") pod \"2ec383ee-4477-4b17-be08-b1bdcea73a7f\" (UID: \"2ec383ee-4477-4b17-be08-b1bdcea73a7f\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676637 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9v9l\" (UniqueName: \"kubernetes.io/projected/066844af-3950-4700-84c4-3c1043ad05e7-kube-api-access-b9v9l\") pod \"066844af-3950-4700-84c4-3c1043ad05e7\" (UID: \"066844af-3950-4700-84c4-3c1043ad05e7\") " Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.676995 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k64vk\" (UniqueName: \"kubernetes.io/projected/30dab223-0b89-4e97-a40d-6913ffa6e8b4-kube-api-access-k64vk\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.677007 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30dab223-0b89-4e97-a40d-6913ffa6e8b4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.677288 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c9eec6-bdfe-4456-a0ca-37c205ac5cba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73c9eec6-bdfe-4456-a0ca-37c205ac5cba" (UID: "73c9eec6-bdfe-4456-a0ca-37c205ac5cba"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.677437 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ec383ee-4477-4b17-be08-b1bdcea73a7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ec383ee-4477-4b17-be08-b1bdcea73a7f" (UID: "2ec383ee-4477-4b17-be08-b1bdcea73a7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.677550 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545768db-9e2f-48e9-92a8-7eaa401eb0b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "545768db-9e2f-48e9-92a8-7eaa401eb0b0" (UID: "545768db-9e2f-48e9-92a8-7eaa401eb0b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.678198 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/066844af-3950-4700-84c4-3c1043ad05e7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "066844af-3950-4700-84c4-3c1043ad05e7" (UID: "066844af-3950-4700-84c4-3c1043ad05e7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.678439 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/066844af-3950-4700-84c4-3c1043ad05e7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "066844af-3950-4700-84c4-3c1043ad05e7" (UID: "066844af-3950-4700-84c4-3c1043ad05e7"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.680757 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c9eec6-bdfe-4456-a0ca-37c205ac5cba-kube-api-access-rr2pj" (OuterVolumeSpecName: "kube-api-access-rr2pj") pod "73c9eec6-bdfe-4456-a0ca-37c205ac5cba" (UID: "73c9eec6-bdfe-4456-a0ca-37c205ac5cba"). InnerVolumeSpecName "kube-api-access-rr2pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.680996 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec383ee-4477-4b17-be08-b1bdcea73a7f-kube-api-access-kn4tg" (OuterVolumeSpecName: "kube-api-access-kn4tg") pod "2ec383ee-4477-4b17-be08-b1bdcea73a7f" (UID: "2ec383ee-4477-4b17-be08-b1bdcea73a7f"). InnerVolumeSpecName "kube-api-access-kn4tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.681076 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066844af-3950-4700-84c4-3c1043ad05e7-kube-api-access-b9v9l" (OuterVolumeSpecName: "kube-api-access-b9v9l") pod "066844af-3950-4700-84c4-3c1043ad05e7" (UID: "066844af-3950-4700-84c4-3c1043ad05e7"). InnerVolumeSpecName "kube-api-access-b9v9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.681293 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/545768db-9e2f-48e9-92a8-7eaa401eb0b0-kube-api-access-kxtdn" (OuterVolumeSpecName: "kube-api-access-kxtdn") pod "545768db-9e2f-48e9-92a8-7eaa401eb0b0" (UID: "545768db-9e2f-48e9-92a8-7eaa401eb0b0"). InnerVolumeSpecName "kube-api-access-kxtdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.683276 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "066844af-3950-4700-84c4-3c1043ad05e7" (UID: "066844af-3950-4700-84c4-3c1043ad05e7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.697240 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "066844af-3950-4700-84c4-3c1043ad05e7" (UID: "066844af-3950-4700-84c4-3c1043ad05e7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.699731 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "066844af-3950-4700-84c4-3c1043ad05e7" (UID: "066844af-3950-4700-84c4-3c1043ad05e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.701355 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/066844af-3950-4700-84c4-3c1043ad05e7-scripts" (OuterVolumeSpecName: "scripts") pod "066844af-3950-4700-84c4-3c1043ad05e7" (UID: "066844af-3950-4700-84c4-3c1043ad05e7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.778919 4930 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/066844af-3950-4700-84c4-3c1043ad05e7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.778960 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.778973 4930 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/066844af-3950-4700-84c4-3c1043ad05e7-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.778985 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545768db-9e2f-48e9-92a8-7eaa401eb0b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.778995 4930 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.779007 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73c9eec6-bdfe-4456-a0ca-37c205ac5cba-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.779019 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxtdn\" (UniqueName: \"kubernetes.io/projected/545768db-9e2f-48e9-92a8-7eaa401eb0b0-kube-api-access-kxtdn\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 
12:15:35.779032 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/066844af-3950-4700-84c4-3c1043ad05e7-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.779043 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn4tg\" (UniqueName: \"kubernetes.io/projected/2ec383ee-4477-4b17-be08-b1bdcea73a7f-kube-api-access-kn4tg\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.779054 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9v9l\" (UniqueName: \"kubernetes.io/projected/066844af-3950-4700-84c4-3c1043ad05e7-kube-api-access-b9v9l\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.779065 4930 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/066844af-3950-4700-84c4-3c1043ad05e7-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.779076 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ec383ee-4477-4b17-be08-b1bdcea73a7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.779087 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr2pj\" (UniqueName: \"kubernetes.io/projected/73c9eec6-bdfe-4456-a0ca-37c205ac5cba-kube-api-access-rr2pj\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.990816 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rrwzz" event={"ID":"2ec383ee-4477-4b17-be08-b1bdcea73a7f","Type":"ContainerDied","Data":"0a52f86c1cc9a490392bdbfdb8fba287c14ce574f195197d5e6551a19524c93a"} Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.990885 4930 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="0a52f86c1cc9a490392bdbfdb8fba287c14ce574f195197d5e6551a19524c93a" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.990829 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rrwzz" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.996726 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2gmcp" Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.996755 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2gmcp" event={"ID":"066844af-3950-4700-84c4-3c1043ad05e7","Type":"ContainerDied","Data":"ebfdf3d6e3f61e0a3a18af89e7edb83bce1a714228856552d12dcbb9b9c1cb77"} Nov 24 12:15:35 crc kubenswrapper[4930]: I1124 12:15:35.996819 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebfdf3d6e3f61e0a3a18af89e7edb83bce1a714228856552d12dcbb9b9c1cb77" Nov 24 12:15:36 crc kubenswrapper[4930]: I1124 12:15:36.002080 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-04b0-account-create-4dtrz" event={"ID":"73c9eec6-bdfe-4456-a0ca-37c205ac5cba","Type":"ContainerDied","Data":"aa829912bc10dd3f1b4f2e1ef27015b59791edcd6d8ca3d5f3bbd796166df3de"} Nov 24 12:15:36 crc kubenswrapper[4930]: I1124 12:15:36.002143 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-04b0-account-create-4dtrz" Nov 24 12:15:36 crc kubenswrapper[4930]: I1124 12:15:36.002144 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa829912bc10dd3f1b4f2e1ef27015b59791edcd6d8ca3d5f3bbd796166df3de" Nov 24 12:15:36 crc kubenswrapper[4930]: I1124 12:15:36.004337 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a730-account-create-spt82" event={"ID":"545768db-9e2f-48e9-92a8-7eaa401eb0b0","Type":"ContainerDied","Data":"927f018d4889410949661501cc47c7004b5145774bf78783a563dc77852c5d75"} Nov 24 12:15:36 crc kubenswrapper[4930]: I1124 12:15:36.004382 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="927f018d4889410949661501cc47c7004b5145774bf78783a563dc77852c5d75" Nov 24 12:15:36 crc kubenswrapper[4930]: I1124 12:15:36.004442 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a730-account-create-spt82" Nov 24 12:15:36 crc kubenswrapper[4930]: I1124 12:15:36.010512 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k9s8c" event={"ID":"30dab223-0b89-4e97-a40d-6913ffa6e8b4","Type":"ContainerDied","Data":"d1c96f6ce4f553f5f5796af894f9bce71846b3d524a49c89d50779566464de2b"} Nov 24 12:15:36 crc kubenswrapper[4930]: I1124 12:15:36.010571 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1c96f6ce4f553f5f5796af894f9bce71846b3d524a49c89d50779566464de2b" Nov 24 12:15:36 crc kubenswrapper[4930]: I1124 12:15:36.010587 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-k9s8c" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.566514 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-4jrg8"] Nov 24 12:15:37 crc kubenswrapper[4930]: E1124 12:15:37.567171 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30dab223-0b89-4e97-a40d-6913ffa6e8b4" containerName="mariadb-database-create" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.567200 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="30dab223-0b89-4e97-a40d-6913ffa6e8b4" containerName="mariadb-database-create" Nov 24 12:15:37 crc kubenswrapper[4930]: E1124 12:15:37.567214 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c9eec6-bdfe-4456-a0ca-37c205ac5cba" containerName="mariadb-account-create" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.567222 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c9eec6-bdfe-4456-a0ca-37c205ac5cba" containerName="mariadb-account-create" Nov 24 12:15:37 crc kubenswrapper[4930]: E1124 12:15:37.567243 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f798dc87-b597-474d-a8f3-5a46781865cd" containerName="mariadb-database-create" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.567251 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="f798dc87-b597-474d-a8f3-5a46781865cd" containerName="mariadb-database-create" Nov 24 12:15:37 crc kubenswrapper[4930]: E1124 12:15:37.567262 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec383ee-4477-4b17-be08-b1bdcea73a7f" containerName="mariadb-database-create" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.567268 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec383ee-4477-4b17-be08-b1bdcea73a7f" containerName="mariadb-database-create" Nov 24 12:15:37 crc kubenswrapper[4930]: E1124 12:15:37.567275 4930 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c8b0739a-ce35-40bb-929e-38d59642bd43" containerName="mariadb-account-create" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.567281 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b0739a-ce35-40bb-929e-38d59642bd43" containerName="mariadb-account-create" Nov 24 12:15:37 crc kubenswrapper[4930]: E1124 12:15:37.567291 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545768db-9e2f-48e9-92a8-7eaa401eb0b0" containerName="mariadb-account-create" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.567297 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="545768db-9e2f-48e9-92a8-7eaa401eb0b0" containerName="mariadb-account-create" Nov 24 12:15:37 crc kubenswrapper[4930]: E1124 12:15:37.567317 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066844af-3950-4700-84c4-3c1043ad05e7" containerName="swift-ring-rebalance" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.567322 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="066844af-3950-4700-84c4-3c1043ad05e7" containerName="swift-ring-rebalance" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.567469 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c9eec6-bdfe-4456-a0ca-37c205ac5cba" containerName="mariadb-account-create" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.567483 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="30dab223-0b89-4e97-a40d-6913ffa6e8b4" containerName="mariadb-database-create" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.567492 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="066844af-3950-4700-84c4-3c1043ad05e7" containerName="swift-ring-rebalance" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.567499 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="f798dc87-b597-474d-a8f3-5a46781865cd" containerName="mariadb-database-create" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 
12:15:37.567508 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b0739a-ce35-40bb-929e-38d59642bd43" containerName="mariadb-account-create" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.567517 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec383ee-4477-4b17-be08-b1bdcea73a7f" containerName="mariadb-database-create" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.567531 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="545768db-9e2f-48e9-92a8-7eaa401eb0b0" containerName="mariadb-account-create" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.568154 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.570194 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-b68t2" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.571732 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.580385 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4jrg8"] Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.713976 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-combined-ca-bundle\") pod \"glance-db-sync-4jrg8\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.714048 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-config-data\") pod \"glance-db-sync-4jrg8\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " 
pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.714089 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-db-sync-config-data\") pod \"glance-db-sync-4jrg8\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.714258 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdwsg\" (UniqueName: \"kubernetes.io/projected/1f43c338-1b9c-402b-ad1b-28e4ee015c32-kube-api-access-sdwsg\") pod \"glance-db-sync-4jrg8\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.815612 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-config-data\") pod \"glance-db-sync-4jrg8\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.815694 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-db-sync-config-data\") pod \"glance-db-sync-4jrg8\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.815736 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdwsg\" (UniqueName: \"kubernetes.io/projected/1f43c338-1b9c-402b-ad1b-28e4ee015c32-kube-api-access-sdwsg\") pod \"glance-db-sync-4jrg8\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:37 crc 
kubenswrapper[4930]: I1124 12:15:37.815879 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-combined-ca-bundle\") pod \"glance-db-sync-4jrg8\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.822055 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-config-data\") pod \"glance-db-sync-4jrg8\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.822086 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-combined-ca-bundle\") pod \"glance-db-sync-4jrg8\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.825262 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-db-sync-config-data\") pod \"glance-db-sync-4jrg8\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.836621 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdwsg\" (UniqueName: \"kubernetes.io/projected/1f43c338-1b9c-402b-ad1b-28e4ee015c32-kube-api-access-sdwsg\") pod \"glance-db-sync-4jrg8\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:37 crc kubenswrapper[4930]: I1124 12:15:37.895331 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:38 crc kubenswrapper[4930]: I1124 12:15:38.452949 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4jrg8"] Nov 24 12:15:38 crc kubenswrapper[4930]: W1124 12:15:38.456267 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f43c338_1b9c_402b_ad1b_28e4ee015c32.slice/crio-52a0353c9926bf06d09d8e075fc5690372bd52fc7c589f74ab7746009be91332 WatchSource:0}: Error finding container 52a0353c9926bf06d09d8e075fc5690372bd52fc7c589f74ab7746009be91332: Status 404 returned error can't find the container with id 52a0353c9926bf06d09d8e075fc5690372bd52fc7c589f74ab7746009be91332 Nov 24 12:15:39 crc kubenswrapper[4930]: I1124 12:15:39.034077 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4jrg8" event={"ID":"1f43c338-1b9c-402b-ad1b-28e4ee015c32","Type":"ContainerStarted","Data":"52a0353c9926bf06d09d8e075fc5690372bd52fc7c589f74ab7746009be91332"} Nov 24 12:15:42 crc kubenswrapper[4930]: I1124 12:15:42.161651 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.419680 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-r7nwq" podUID="ce96cb2b-064b-4d76-a101-df9f31c86314" containerName="ovn-controller" probeResult="failure" output=< Nov 24 12:15:43 crc kubenswrapper[4930]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 12:15:43 crc kubenswrapper[4930]: > Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.429125 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.430790 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovn-controller-ovs-q5rmd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.649863 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-r7nwq-config-x2ppd"] Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.652053 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.657879 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.660163 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-r7nwq-config-x2ppd"] Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.781809 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-run-ovn\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.781860 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-scripts\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.782109 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-log-ovn\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc 
kubenswrapper[4930]: I1124 12:15:43.782154 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28fw8\" (UniqueName: \"kubernetes.io/projected/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-kube-api-access-28fw8\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.782182 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-run\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.782224 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-additional-scripts\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.883420 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28fw8\" (UniqueName: \"kubernetes.io/projected/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-kube-api-access-28fw8\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.883845 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-run\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " 
pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.883876 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-additional-scripts\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.883950 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-run-ovn\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.883983 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-scripts\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.884050 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-log-ovn\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.884085 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-run-ovn\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " 
pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.884053 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-run\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.884960 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-additional-scripts\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.885038 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-log-ovn\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.886486 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-scripts\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.904471 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28fw8\" (UniqueName: \"kubernetes.io/projected/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-kube-api-access-28fw8\") pod \"ovn-controller-r7nwq-config-x2ppd\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 
12:15:43 crc kubenswrapper[4930]: I1124 12:15:43.985972 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:44 crc kubenswrapper[4930]: I1124 12:15:44.200243 4930 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podacf2e767-1d50-416b-aa31-16a1a6ee631c"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podacf2e767-1d50-416b-aa31-16a1a6ee631c] : Timed out while waiting for systemd to remove kubepods-besteffort-podacf2e767_1d50_416b_aa31_16a1a6ee631c.slice" Nov 24 12:15:44 crc kubenswrapper[4930]: E1124 12:15:44.200494 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podacf2e767-1d50-416b-aa31-16a1a6ee631c] : unable to destroy cgroup paths for cgroup [kubepods besteffort podacf2e767-1d50-416b-aa31-16a1a6ee631c] : Timed out while waiting for systemd to remove kubepods-besteffort-podacf2e767_1d50_416b_aa31_16a1a6ee631c.slice" pod="openstack/dnsmasq-dns-6d8746976c-42wpq" podUID="acf2e767-1d50-416b-aa31-16a1a6ee631c" Nov 24 12:15:45 crc kubenswrapper[4930]: I1124 12:15:45.083344 4930 generic.go:334] "Generic (PLEG): container finished" podID="d35e6340-889e-4150-90c7-059417befffd" containerID="0881b547c4c2e31ed4e67f5a73adbf8448735700219670fe6dabd9d75186822a" exitCode=0 Nov 24 12:15:45 crc kubenswrapper[4930]: I1124 12:15:45.083436 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d35e6340-889e-4150-90c7-059417befffd","Type":"ContainerDied","Data":"0881b547c4c2e31ed4e67f5a73adbf8448735700219670fe6dabd9d75186822a"} Nov 24 12:15:45 crc kubenswrapper[4930]: I1124 12:15:45.083663 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d8746976c-42wpq" Nov 24 12:15:45 crc kubenswrapper[4930]: I1124 12:15:45.198694 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-42wpq"] Nov 24 12:15:45 crc kubenswrapper[4930]: I1124 12:15:45.206690 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-42wpq"] Nov 24 12:15:46 crc kubenswrapper[4930]: I1124 12:15:46.103489 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf2e767-1d50-416b-aa31-16a1a6ee631c" path="/var/lib/kubelet/pods/acf2e767-1d50-416b-aa31-16a1a6ee631c/volumes" Nov 24 12:15:46 crc kubenswrapper[4930]: I1124 12:15:46.103989 4930 generic.go:334] "Generic (PLEG): container finished" podID="270a64e1-2837-47ac-860f-d616efdc6bbc" containerID="d60a7b8775c78678336cadec05e6522445fc860e50de6e16c9d5f9b3ff8c35c1" exitCode=0 Nov 24 12:15:46 crc kubenswrapper[4930]: I1124 12:15:46.104039 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"270a64e1-2837-47ac-860f-d616efdc6bbc","Type":"ContainerDied","Data":"d60a7b8775c78678336cadec05e6522445fc860e50de6e16c9d5f9b3ff8c35c1"} Nov 24 12:15:47 crc kubenswrapper[4930]: I1124 12:15:47.054034 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:47 crc kubenswrapper[4930]: I1124 12:15:47.069283 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652-etc-swift\") pod \"swift-storage-0\" (UID: \"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652\") " pod="openstack/swift-storage-0" Nov 24 12:15:47 crc kubenswrapper[4930]: I1124 12:15:47.089110 4930 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 24 12:15:48 crc kubenswrapper[4930]: I1124 12:15:48.413382 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-r7nwq" podUID="ce96cb2b-064b-4d76-a101-df9f31c86314" containerName="ovn-controller" probeResult="failure" output=< Nov 24 12:15:48 crc kubenswrapper[4930]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 12:15:48 crc kubenswrapper[4930]: > Nov 24 12:15:49 crc kubenswrapper[4930]: I1124 12:15:49.429076 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-r7nwq-config-x2ppd"] Nov 24 12:15:49 crc kubenswrapper[4930]: I1124 12:15:49.565518 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 12:15:49 crc kubenswrapper[4930]: W1124 12:15:49.587085 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc2e28ee_ab31_4a3a_b2a8_0b8c6baf1652.slice/crio-92c9d8e75a18c711fa7b220830dd6ec472a030dd6ca70a23a69a9f0259a00298 WatchSource:0}: Error finding container 92c9d8e75a18c711fa7b220830dd6ec472a030dd6ca70a23a69a9f0259a00298: Status 404 returned error can't find the container with id 92c9d8e75a18c711fa7b220830dd6ec472a030dd6ca70a23a69a9f0259a00298 Nov 24 12:15:50 crc kubenswrapper[4930]: I1124 12:15:50.157969 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"270a64e1-2837-47ac-860f-d616efdc6bbc","Type":"ContainerStarted","Data":"a835f07f63a3e2b2e883d951626ad566d7e97283aea6a0166eddff40548bcfdf"} Nov 24 12:15:50 crc kubenswrapper[4930]: I1124 12:15:50.158707 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 12:15:50 crc kubenswrapper[4930]: I1124 12:15:50.159714 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4jrg8" 
event={"ID":"1f43c338-1b9c-402b-ad1b-28e4ee015c32","Type":"ContainerStarted","Data":"b2836654a2839f1564576da6434cc885ecbe54859abe20c0d5483aa8d36d466b"} Nov 24 12:15:50 crc kubenswrapper[4930]: I1124 12:15:50.176060 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d35e6340-889e-4150-90c7-059417befffd","Type":"ContainerStarted","Data":"2070a163637df5ef2912b5ae6193d8c6a469ad7618d0121258a73de4773df98d"} Nov 24 12:15:50 crc kubenswrapper[4930]: I1124 12:15:50.176324 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:15:50 crc kubenswrapper[4930]: I1124 12:15:50.178302 4930 generic.go:334] "Generic (PLEG): container finished" podID="336cac71-4c11-4e2b-82a8-5cb4a12aa68e" containerID="3f3df614aab9676be05589959fc29e0c09f36b69b61c72d2a912a1774e5702ea" exitCode=0 Nov 24 12:15:50 crc kubenswrapper[4930]: I1124 12:15:50.178441 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r7nwq-config-x2ppd" event={"ID":"336cac71-4c11-4e2b-82a8-5cb4a12aa68e","Type":"ContainerDied","Data":"3f3df614aab9676be05589959fc29e0c09f36b69b61c72d2a912a1774e5702ea"} Nov 24 12:15:50 crc kubenswrapper[4930]: I1124 12:15:50.178574 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r7nwq-config-x2ppd" event={"ID":"336cac71-4c11-4e2b-82a8-5cb4a12aa68e","Type":"ContainerStarted","Data":"a1bd7e97c2f8aa8abbb91d97b6aceea3ff5f0e39a06eb76a177dd73ca0007c38"} Nov 24 12:15:50 crc kubenswrapper[4930]: I1124 12:15:50.180629 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"92c9d8e75a18c711fa7b220830dd6ec472a030dd6ca70a23a69a9f0259a00298"} Nov 24 12:15:50 crc kubenswrapper[4930]: I1124 12:15:50.189615 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" 
podStartSLOduration=-9223371963.665186 podStartE2EDuration="1m13.189590195s" podCreationTimestamp="2025-11-24 12:14:37 +0000 UTC" firstStartedPulling="2025-11-24 12:14:39.131238945 +0000 UTC m=+925.745566885" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:50.183970104 +0000 UTC m=+996.798298074" watchObservedRunningTime="2025-11-24 12:15:50.189590195 +0000 UTC m=+996.803918155" Nov 24 12:15:50 crc kubenswrapper[4930]: I1124 12:15:50.213440 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.494095307 podStartE2EDuration="1m14.213418461s" podCreationTimestamp="2025-11-24 12:14:36 +0000 UTC" firstStartedPulling="2025-11-24 12:14:38.338338856 +0000 UTC m=+924.952666806" lastFinishedPulling="2025-11-24 12:15:11.05766201 +0000 UTC m=+957.671989960" observedRunningTime="2025-11-24 12:15:50.207863221 +0000 UTC m=+996.822191171" watchObservedRunningTime="2025-11-24 12:15:50.213418461 +0000 UTC m=+996.827746411" Nov 24 12:15:50 crc kubenswrapper[4930]: I1124 12:15:50.257234 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-4jrg8" podStartSLOduration=2.6499417530000002 podStartE2EDuration="13.257207561s" podCreationTimestamp="2025-11-24 12:15:37 +0000 UTC" firstStartedPulling="2025-11-24 12:15:38.458690515 +0000 UTC m=+985.073018465" lastFinishedPulling="2025-11-24 12:15:49.065956323 +0000 UTC m=+995.680284273" observedRunningTime="2025-11-24 12:15:50.246030819 +0000 UTC m=+996.860358779" watchObservedRunningTime="2025-11-24 12:15:50.257207561 +0000 UTC m=+996.871535511" Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.206025 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"9352eb63e9649679bde459e5441b43ee83d76cbac73d0dd515b9250d3f555d26"} Nov 24 12:15:51 crc 
kubenswrapper[4930]: I1124 12:15:51.764010 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.936166 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-additional-scripts\") pod \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.936414 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-scripts\") pod \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.936579 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-run\") pod \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.936663 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-run-ovn\") pod \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.936654 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-run" (OuterVolumeSpecName: "var-run") pod "336cac71-4c11-4e2b-82a8-5cb4a12aa68e" (UID: "336cac71-4c11-4e2b-82a8-5cb4a12aa68e"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.936685 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-log-ovn\") pod \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.936717 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "336cac71-4c11-4e2b-82a8-5cb4a12aa68e" (UID: "336cac71-4c11-4e2b-82a8-5cb4a12aa68e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.936722 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28fw8\" (UniqueName: \"kubernetes.io/projected/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-kube-api-access-28fw8\") pod \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\" (UID: \"336cac71-4c11-4e2b-82a8-5cb4a12aa68e\") " Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.936736 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "336cac71-4c11-4e2b-82a8-5cb4a12aa68e" (UID: "336cac71-4c11-4e2b-82a8-5cb4a12aa68e"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.937027 4930 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.937043 4930 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.937051 4930 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.937074 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "336cac71-4c11-4e2b-82a8-5cb4a12aa68e" (UID: "336cac71-4c11-4e2b-82a8-5cb4a12aa68e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.937453 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-scripts" (OuterVolumeSpecName: "scripts") pod "336cac71-4c11-4e2b-82a8-5cb4a12aa68e" (UID: "336cac71-4c11-4e2b-82a8-5cb4a12aa68e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:51 crc kubenswrapper[4930]: I1124 12:15:51.943252 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-kube-api-access-28fw8" (OuterVolumeSpecName: "kube-api-access-28fw8") pod "336cac71-4c11-4e2b-82a8-5cb4a12aa68e" (UID: "336cac71-4c11-4e2b-82a8-5cb4a12aa68e"). InnerVolumeSpecName "kube-api-access-28fw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.038897 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28fw8\" (UniqueName: \"kubernetes.io/projected/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-kube-api-access-28fw8\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.038937 4930 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.038964 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/336cac71-4c11-4e2b-82a8-5cb4a12aa68e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.217766 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"6d313b28aaf1b380eeac2581b176d6f88c8baeecfa5cd3c0271fca118ffca8bc"} Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.219031 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"0065e5e6a06cd6b203afd324e954d9d826d6ffe27048deb5ea08a8d049061851"} Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.219101 
4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"033585bdb852f5b2be01800fb2bbad5eb0d115b945ff10a31716ab85fc77ed91"} Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.219894 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r7nwq-config-x2ppd" event={"ID":"336cac71-4c11-4e2b-82a8-5cb4a12aa68e","Type":"ContainerDied","Data":"a1bd7e97c2f8aa8abbb91d97b6aceea3ff5f0e39a06eb76a177dd73ca0007c38"} Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.219948 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1bd7e97c2f8aa8abbb91d97b6aceea3ff5f0e39a06eb76a177dd73ca0007c38" Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.220024 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-r7nwq-config-x2ppd" Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.884783 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-r7nwq-config-x2ppd"] Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.890611 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-r7nwq-config-x2ppd"] Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.986331 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-r7nwq-config-gv59q"] Nov 24 12:15:52 crc kubenswrapper[4930]: E1124 12:15:52.986767 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336cac71-4c11-4e2b-82a8-5cb4a12aa68e" containerName="ovn-config" Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.986791 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="336cac71-4c11-4e2b-82a8-5cb4a12aa68e" containerName="ovn-config" Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.987024 4930 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="336cac71-4c11-4e2b-82a8-5cb4a12aa68e" containerName="ovn-config" Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.987746 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:52 crc kubenswrapper[4930]: I1124 12:15:52.990138 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.022721 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-r7nwq-config-gv59q"] Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.158183 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f92e3cd-3935-49ca-880b-f09c738024c5-additional-scripts\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.158246 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-log-ovn\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.158294 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qx8c\" (UniqueName: \"kubernetes.io/projected/4f92e3cd-3935-49ca-880b-f09c738024c5-kube-api-access-2qx8c\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.158321 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f92e3cd-3935-49ca-880b-f09c738024c5-scripts\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.158402 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-run-ovn\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.158421 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-run\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.259495 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f92e3cd-3935-49ca-880b-f09c738024c5-additional-scripts\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.259583 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-log-ovn\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.259658 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qx8c\" (UniqueName: \"kubernetes.io/projected/4f92e3cd-3935-49ca-880b-f09c738024c5-kube-api-access-2qx8c\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.259697 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f92e3cd-3935-49ca-880b-f09c738024c5-scripts\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.259719 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-run-ovn\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.259747 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-run\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.259945 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-run\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.259951 4930 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-log-ovn\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.259980 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-run-ovn\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.260254 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f92e3cd-3935-49ca-880b-f09c738024c5-additional-scripts\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.261919 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f92e3cd-3935-49ca-880b-f09c738024c5-scripts\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.278661 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qx8c\" (UniqueName: \"kubernetes.io/projected/4f92e3cd-3935-49ca-880b-f09c738024c5-kube-api-access-2qx8c\") pod \"ovn-controller-r7nwq-config-gv59q\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.313192 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.458972 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-r7nwq" Nov 24 12:15:53 crc kubenswrapper[4930]: I1124 12:15:53.850760 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-r7nwq-config-gv59q"] Nov 24 12:15:54 crc kubenswrapper[4930]: I1124 12:15:54.101922 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="336cac71-4c11-4e2b-82a8-5cb4a12aa68e" path="/var/lib/kubelet/pods/336cac71-4c11-4e2b-82a8-5cb4a12aa68e/volumes" Nov 24 12:15:54 crc kubenswrapper[4930]: I1124 12:15:54.238815 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r7nwq-config-gv59q" event={"ID":"4f92e3cd-3935-49ca-880b-f09c738024c5","Type":"ContainerStarted","Data":"df70ae7af6a7506287afaa9a3009e3f0a234734d2aec886c043e992c8965b2c0"} Nov 24 12:15:54 crc kubenswrapper[4930]: I1124 12:15:54.239159 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r7nwq-config-gv59q" event={"ID":"4f92e3cd-3935-49ca-880b-f09c738024c5","Type":"ContainerStarted","Data":"f026c7be11129b66cf13fcbd624557e600bf5b946b1eecf8064cc078a46e7f38"} Nov 24 12:15:54 crc kubenswrapper[4930]: I1124 12:15:54.245001 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"7b9e9f14f0d4921520f996396fa6e9125eb5e3c70c209cb322e98ea362b904f3"} Nov 24 12:15:54 crc kubenswrapper[4930]: I1124 12:15:54.245051 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"c5fec27ef674bc1308e09ec1842a8f9e742a45f464e5c5fd6fde59bea600b570"} Nov 24 12:15:54 crc kubenswrapper[4930]: I1124 12:15:54.245067 4930 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"d00a8fc8d361458d9e27e177ced2460c5132e76179ad1317d4bb92f3b1e9df1d"} Nov 24 12:15:54 crc kubenswrapper[4930]: I1124 12:15:54.245079 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"4f618b6dbacbf5a6fc37d17f3437b6047ea9e00684d05c31fd1192a8e6fbc43e"} Nov 24 12:15:54 crc kubenswrapper[4930]: I1124 12:15:54.262704 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-r7nwq-config-gv59q" podStartSLOduration=2.262656055 podStartE2EDuration="2.262656055s" podCreationTimestamp="2025-11-24 12:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:54.260127422 +0000 UTC m=+1000.874455382" watchObservedRunningTime="2025-11-24 12:15:54.262656055 +0000 UTC m=+1000.876984005" Nov 24 12:15:55 crc kubenswrapper[4930]: I1124 12:15:55.255847 4930 generic.go:334] "Generic (PLEG): container finished" podID="4f92e3cd-3935-49ca-880b-f09c738024c5" containerID="df70ae7af6a7506287afaa9a3009e3f0a234734d2aec886c043e992c8965b2c0" exitCode=0 Nov 24 12:15:55 crc kubenswrapper[4930]: I1124 12:15:55.255941 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-r7nwq-config-gv59q" event={"ID":"4f92e3cd-3935-49ca-880b-f09c738024c5","Type":"ContainerDied","Data":"df70ae7af6a7506287afaa9a3009e3f0a234734d2aec886c043e992c8965b2c0"} Nov 24 12:15:55 crc kubenswrapper[4930]: I1124 12:15:55.260329 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"221d6feb505f95f541bae29cb4dd738249d0c8cf81b5531b401c4d6ced7e73bd"} Nov 24 12:15:56 crc 
kubenswrapper[4930]: I1124 12:15:56.277407 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"c9398d53fa041fa00794edb764c9517ae667558c52dd3111f276a1d534675181"} Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.277732 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"d3aff9ed74826e8ebbaae0d667ec6bbb5f003a86c1abc63879593c6730b2d082"} Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.277744 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"e51de74a431c4777f604b6e578bcad500d35cb7d20fd9bc65840068968622870"} Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.277753 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"2e3343a19213ae62bc18eb4edd4d00506632c4639db073eeffe5a1182da321b7"} Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.277762 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"fc244d66b7f4a6a6acd4f4d1be28326bd9751a040a1195c17f9e99f94c1ce156"} Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.277770 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652","Type":"ContainerStarted","Data":"cf102e83c3a8a118259b3ac7ec608b3c14c3905cd9ecaf2dd00f99064b209774"} Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.316375 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.829208537 
podStartE2EDuration="43.316349695s" podCreationTimestamp="2025-11-24 12:15:13 +0000 UTC" firstStartedPulling="2025-11-24 12:15:49.589426821 +0000 UTC m=+996.203754771" lastFinishedPulling="2025-11-24 12:15:55.076567979 +0000 UTC m=+1001.690895929" observedRunningTime="2025-11-24 12:15:56.307451499 +0000 UTC m=+1002.921779489" watchObservedRunningTime="2025-11-24 12:15:56.316349695 +0000 UTC m=+1002.930677645" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.619409 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56766df65f-fpnzf"] Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.624018 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.629014 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-fpnzf"] Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.631318 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.652801 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.719752 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-config\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.719805 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-dns-swift-storage-0\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.719834 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-ovsdbserver-nb\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.719865 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzj9m\" (UniqueName: \"kubernetes.io/projected/1d36af7c-358b-4880-912b-aaa6af827574-kube-api-access-hzj9m\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.719952 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-ovsdbserver-sb\") pod 
\"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.720006 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-dns-svc\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.821200 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f92e3cd-3935-49ca-880b-f09c738024c5-scripts\") pod \"4f92e3cd-3935-49ca-880b-f09c738024c5\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.821294 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-run\") pod \"4f92e3cd-3935-49ca-880b-f09c738024c5\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.821362 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-log-ovn\") pod \"4f92e3cd-3935-49ca-880b-f09c738024c5\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.821367 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-run" (OuterVolumeSpecName: "var-run") pod "4f92e3cd-3935-49ca-880b-f09c738024c5" (UID: "4f92e3cd-3935-49ca-880b-f09c738024c5"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.821407 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f92e3cd-3935-49ca-880b-f09c738024c5-additional-scripts\") pod \"4f92e3cd-3935-49ca-880b-f09c738024c5\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.821430 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qx8c\" (UniqueName: \"kubernetes.io/projected/4f92e3cd-3935-49ca-880b-f09c738024c5-kube-api-access-2qx8c\") pod \"4f92e3cd-3935-49ca-880b-f09c738024c5\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.821433 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4f92e3cd-3935-49ca-880b-f09c738024c5" (UID: "4f92e3cd-3935-49ca-880b-f09c738024c5"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.821557 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-run-ovn\") pod \"4f92e3cd-3935-49ca-880b-f09c738024c5\" (UID: \"4f92e3cd-3935-49ca-880b-f09c738024c5\") " Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.821732 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4f92e3cd-3935-49ca-880b-f09c738024c5" (UID: "4f92e3cd-3935-49ca-880b-f09c738024c5"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.821814 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-dns-svc\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.821921 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-config\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.821952 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-dns-swift-storage-0\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.821976 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-ovsdbserver-nb\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.822004 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzj9m\" (UniqueName: \"kubernetes.io/projected/1d36af7c-358b-4880-912b-aaa6af827574-kube-api-access-hzj9m\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 
24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.822063 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f92e3cd-3935-49ca-880b-f09c738024c5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4f92e3cd-3935-49ca-880b-f09c738024c5" (UID: "4f92e3cd-3935-49ca-880b-f09c738024c5"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.822168 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-ovsdbserver-sb\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.822320 4930 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.822336 4930 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.822347 4930 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f92e3cd-3935-49ca-880b-f09c738024c5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.822356 4930 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f92e3cd-3935-49ca-880b-f09c738024c5-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.822806 4930 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-dns-svc\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.822986 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-dns-swift-storage-0\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.823299 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f92e3cd-3935-49ca-880b-f09c738024c5-scripts" (OuterVolumeSpecName: "scripts") pod "4f92e3cd-3935-49ca-880b-f09c738024c5" (UID: "4f92e3cd-3935-49ca-880b-f09c738024c5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.823354 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-config\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.823766 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-ovsdbserver-sb\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.824145 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-ovsdbserver-nb\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.835161 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f92e3cd-3935-49ca-880b-f09c738024c5-kube-api-access-2qx8c" (OuterVolumeSpecName: "kube-api-access-2qx8c") pod "4f92e3cd-3935-49ca-880b-f09c738024c5" (UID: "4f92e3cd-3935-49ca-880b-f09c738024c5"). InnerVolumeSpecName "kube-api-access-2qx8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.842869 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzj9m\" (UniqueName: \"kubernetes.io/projected/1d36af7c-358b-4880-912b-aaa6af827574-kube-api-access-hzj9m\") pod \"dnsmasq-dns-56766df65f-fpnzf\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.924283 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f92e3cd-3935-49ca-880b-f09c738024c5-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.924315 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qx8c\" (UniqueName: \"kubernetes.io/projected/4f92e3cd-3935-49ca-880b-f09c738024c5-kube-api-access-2qx8c\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.924445 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-r7nwq-config-gv59q"] Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.931899 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-r7nwq-config-gv59q"] Nov 24 12:15:56 crc kubenswrapper[4930]: I1124 12:15:56.973865 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:57 crc kubenswrapper[4930]: I1124 12:15:57.200218 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-fpnzf"] Nov 24 12:15:57 crc kubenswrapper[4930]: W1124 12:15:57.206575 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d36af7c_358b_4880_912b_aaa6af827574.slice/crio-9a011ad569c59727cb2821f336419f2270d526622ad913882388878a3645064a WatchSource:0}: Error finding container 9a011ad569c59727cb2821f336419f2270d526622ad913882388878a3645064a: Status 404 returned error can't find the container with id 9a011ad569c59727cb2821f336419f2270d526622ad913882388878a3645064a Nov 24 12:15:57 crc kubenswrapper[4930]: I1124 12:15:57.309751 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f026c7be11129b66cf13fcbd624557e600bf5b946b1eecf8064cc078a46e7f38" Nov 24 12:15:57 crc kubenswrapper[4930]: I1124 12:15:57.309765 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-r7nwq-config-gv59q" Nov 24 12:15:57 crc kubenswrapper[4930]: I1124 12:15:57.315980 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-fpnzf" event={"ID":"1d36af7c-358b-4880-912b-aaa6af827574","Type":"ContainerStarted","Data":"9a011ad569c59727cb2821f336419f2270d526622ad913882388878a3645064a"} Nov 24 12:15:57 crc kubenswrapper[4930]: I1124 12:15:57.318416 4930 generic.go:334] "Generic (PLEG): container finished" podID="1f43c338-1b9c-402b-ad1b-28e4ee015c32" containerID="b2836654a2839f1564576da6434cc885ecbe54859abe20c0d5483aa8d36d466b" exitCode=0 Nov 24 12:15:57 crc kubenswrapper[4930]: I1124 12:15:57.318530 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4jrg8" event={"ID":"1f43c338-1b9c-402b-ad1b-28e4ee015c32","Type":"ContainerDied","Data":"b2836654a2839f1564576da6434cc885ecbe54859abe20c0d5483aa8d36d466b"} Nov 24 12:15:57 crc kubenswrapper[4930]: E1124 12:15:57.640566 4930 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d36af7c_358b_4880_912b_aaa6af827574.slice/crio-conmon-3a85f07bcf5585bedd1661fe73fdebce4498aa3105769237e587ec98fb04a00d.scope\": RecentStats: unable to find data in memory cache]" Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.094051 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f92e3cd-3935-49ca-880b-f09c738024c5" path="/var/lib/kubelet/pods/4f92e3cd-3935-49ca-880b-f09c738024c5/volumes" Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.327849 4930 generic.go:334] "Generic (PLEG): container finished" podID="1d36af7c-358b-4880-912b-aaa6af827574" containerID="3a85f07bcf5585bedd1661fe73fdebce4498aa3105769237e587ec98fb04a00d" exitCode=0 Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.327958 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-56766df65f-fpnzf" event={"ID":"1d36af7c-358b-4880-912b-aaa6af827574","Type":"ContainerDied","Data":"3a85f07bcf5585bedd1661fe73fdebce4498aa3105769237e587ec98fb04a00d"} Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.718722 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.854775 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-db-sync-config-data\") pod \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.855643 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-combined-ca-bundle\") pod \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.855761 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-config-data\") pod \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.856352 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdwsg\" (UniqueName: \"kubernetes.io/projected/1f43c338-1b9c-402b-ad1b-28e4ee015c32-kube-api-access-sdwsg\") pod \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\" (UID: \"1f43c338-1b9c-402b-ad1b-28e4ee015c32\") " Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.866029 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1f43c338-1b9c-402b-ad1b-28e4ee015c32-kube-api-access-sdwsg" (OuterVolumeSpecName: "kube-api-access-sdwsg") pod "1f43c338-1b9c-402b-ad1b-28e4ee015c32" (UID: "1f43c338-1b9c-402b-ad1b-28e4ee015c32"). InnerVolumeSpecName "kube-api-access-sdwsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.870357 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1f43c338-1b9c-402b-ad1b-28e4ee015c32" (UID: "1f43c338-1b9c-402b-ad1b-28e4ee015c32"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.897457 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f43c338-1b9c-402b-ad1b-28e4ee015c32" (UID: "1f43c338-1b9c-402b-ad1b-28e4ee015c32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.903693 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-config-data" (OuterVolumeSpecName: "config-data") pod "1f43c338-1b9c-402b-ad1b-28e4ee015c32" (UID: "1f43c338-1b9c-402b-ad1b-28e4ee015c32"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.960156 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdwsg\" (UniqueName: \"kubernetes.io/projected/1f43c338-1b9c-402b-ad1b-28e4ee015c32-kube-api-access-sdwsg\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.960235 4930 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.960248 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:58 crc kubenswrapper[4930]: I1124 12:15:58.960309 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f43c338-1b9c-402b-ad1b-28e4ee015c32-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.337211 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-fpnzf" event={"ID":"1d36af7c-358b-4880-912b-aaa6af827574","Type":"ContainerStarted","Data":"30bb967b98f159c60a4f867a57b5ba0f6d0ed8368676615f7c5aca75cf482ea9"} Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.337314 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.338332 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4jrg8" event={"ID":"1f43c338-1b9c-402b-ad1b-28e4ee015c32","Type":"ContainerDied","Data":"52a0353c9926bf06d09d8e075fc5690372bd52fc7c589f74ab7746009be91332"} Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 
12:15:59.338358 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a0353c9926bf06d09d8e075fc5690372bd52fc7c589f74ab7746009be91332" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.338401 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4jrg8" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.370911 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56766df65f-fpnzf" podStartSLOduration=3.370889355 podStartE2EDuration="3.370889355s" podCreationTimestamp="2025-11-24 12:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:59.362396201 +0000 UTC m=+1005.976724211" watchObservedRunningTime="2025-11-24 12:15:59.370889355 +0000 UTC m=+1005.985217325" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.751469 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-fpnzf"] Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.781687 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-ht7fg"] Nov 24 12:15:59 crc kubenswrapper[4930]: E1124 12:15:59.782025 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f43c338-1b9c-402b-ad1b-28e4ee015c32" containerName="glance-db-sync" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.782039 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f43c338-1b9c-402b-ad1b-28e4ee015c32" containerName="glance-db-sync" Nov 24 12:15:59 crc kubenswrapper[4930]: E1124 12:15:59.782067 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f92e3cd-3935-49ca-880b-f09c738024c5" containerName="ovn-config" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.782072 4930 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4f92e3cd-3935-49ca-880b-f09c738024c5" containerName="ovn-config" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.782238 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f43c338-1b9c-402b-ad1b-28e4ee015c32" containerName="glance-db-sync" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.782252 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f92e3cd-3935-49ca-880b-f09c738024c5" containerName="ovn-config" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.783139 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.812116 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-ht7fg"] Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.875111 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-dns-svc\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.875159 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-ovsdbserver-nb\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.875182 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-ovsdbserver-sb\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " 
pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.875209 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-dns-swift-storage-0\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.875226 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2txmg\" (UniqueName: \"kubernetes.io/projected/0d8d8acd-7227-4a01-aa30-ece579854880-kube-api-access-2txmg\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.875247 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-config\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.976370 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-dns-svc\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.976421 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-ovsdbserver-nb\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " 
pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.976443 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-ovsdbserver-sb\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.976467 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-dns-swift-storage-0\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.976508 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2txmg\" (UniqueName: \"kubernetes.io/projected/0d8d8acd-7227-4a01-aa30-ece579854880-kube-api-access-2txmg\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.976528 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-config\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.977435 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-config\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc 
kubenswrapper[4930]: I1124 12:15:59.977958 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-dns-svc\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.979878 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-dns-swift-storage-0\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.980059 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-ovsdbserver-sb\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:15:59 crc kubenswrapper[4930]: I1124 12:15:59.980111 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-ovsdbserver-nb\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:16:00 crc kubenswrapper[4930]: I1124 12:15:59.999789 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2txmg\" (UniqueName: \"kubernetes.io/projected/0d8d8acd-7227-4a01-aa30-ece579854880-kube-api-access-2txmg\") pod \"dnsmasq-dns-6856c564b9-ht7fg\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:16:00 crc kubenswrapper[4930]: I1124 12:16:00.100096 4930 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:16:00 crc kubenswrapper[4930]: I1124 12:16:00.545359 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-ht7fg"] Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.353359 4930 generic.go:334] "Generic (PLEG): container finished" podID="0d8d8acd-7227-4a01-aa30-ece579854880" containerID="a9fbdcc557f9d97dcd33d58f732b5714cebea9c2c676724340b71c9d180d5bec" exitCode=0 Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.353436 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" event={"ID":"0d8d8acd-7227-4a01-aa30-ece579854880","Type":"ContainerDied","Data":"a9fbdcc557f9d97dcd33d58f732b5714cebea9c2c676724340b71c9d180d5bec"} Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.353989 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" event={"ID":"0d8d8acd-7227-4a01-aa30-ece579854880","Type":"ContainerStarted","Data":"8a3eba635ec981a56abefd550543690aa88204b5a46fcafedd04521236e223ff"} Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.354131 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56766df65f-fpnzf" podUID="1d36af7c-358b-4880-912b-aaa6af827574" containerName="dnsmasq-dns" containerID="cri-o://30bb967b98f159c60a4f867a57b5ba0f6d0ed8368676615f7c5aca75cf482ea9" gracePeriod=10 Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.778122 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.809738 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.809814 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.906664 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-dns-swift-storage-0\") pod \"1d36af7c-358b-4880-912b-aaa6af827574\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.906726 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-ovsdbserver-sb\") pod \"1d36af7c-358b-4880-912b-aaa6af827574\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.906762 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-dns-svc\") pod \"1d36af7c-358b-4880-912b-aaa6af827574\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.906889 4930 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-ovsdbserver-nb\") pod \"1d36af7c-358b-4880-912b-aaa6af827574\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.907012 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-config\") pod \"1d36af7c-358b-4880-912b-aaa6af827574\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.907056 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzj9m\" (UniqueName: \"kubernetes.io/projected/1d36af7c-358b-4880-912b-aaa6af827574-kube-api-access-hzj9m\") pod \"1d36af7c-358b-4880-912b-aaa6af827574\" (UID: \"1d36af7c-358b-4880-912b-aaa6af827574\") " Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.915787 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d36af7c-358b-4880-912b-aaa6af827574-kube-api-access-hzj9m" (OuterVolumeSpecName: "kube-api-access-hzj9m") pod "1d36af7c-358b-4880-912b-aaa6af827574" (UID: "1d36af7c-358b-4880-912b-aaa6af827574"). InnerVolumeSpecName "kube-api-access-hzj9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.947461 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d36af7c-358b-4880-912b-aaa6af827574" (UID: "1d36af7c-358b-4880-912b-aaa6af827574"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.948040 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d36af7c-358b-4880-912b-aaa6af827574" (UID: "1d36af7c-358b-4880-912b-aaa6af827574"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.950521 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d36af7c-358b-4880-912b-aaa6af827574" (UID: "1d36af7c-358b-4880-912b-aaa6af827574"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.960029 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-config" (OuterVolumeSpecName: "config") pod "1d36af7c-358b-4880-912b-aaa6af827574" (UID: "1d36af7c-358b-4880-912b-aaa6af827574"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:01 crc kubenswrapper[4930]: I1124 12:16:01.961320 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d36af7c-358b-4880-912b-aaa6af827574" (UID: "1d36af7c-358b-4880-912b-aaa6af827574"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.009124 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.009169 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzj9m\" (UniqueName: \"kubernetes.io/projected/1d36af7c-358b-4880-912b-aaa6af827574-kube-api-access-hzj9m\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.009187 4930 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.009202 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.009215 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.009226 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d36af7c-358b-4880-912b-aaa6af827574-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.365977 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" event={"ID":"0d8d8acd-7227-4a01-aa30-ece579854880","Type":"ContainerStarted","Data":"ac5e2b9c86ba50bdac4214c47e50148adab62996714929148d8dc612be306cc3"} Nov 24 12:16:02 crc 
kubenswrapper[4930]: I1124 12:16:02.366125 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.368236 4930 generic.go:334] "Generic (PLEG): container finished" podID="1d36af7c-358b-4880-912b-aaa6af827574" containerID="30bb967b98f159c60a4f867a57b5ba0f6d0ed8368676615f7c5aca75cf482ea9" exitCode=0 Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.368286 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56766df65f-fpnzf" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.368298 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-fpnzf" event={"ID":"1d36af7c-358b-4880-912b-aaa6af827574","Type":"ContainerDied","Data":"30bb967b98f159c60a4f867a57b5ba0f6d0ed8368676615f7c5aca75cf482ea9"} Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.368730 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-fpnzf" event={"ID":"1d36af7c-358b-4880-912b-aaa6af827574","Type":"ContainerDied","Data":"9a011ad569c59727cb2821f336419f2270d526622ad913882388878a3645064a"} Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.368767 4930 scope.go:117] "RemoveContainer" containerID="30bb967b98f159c60a4f867a57b5ba0f6d0ed8368676615f7c5aca75cf482ea9" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.386721 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" podStartSLOduration=3.38669627 podStartE2EDuration="3.38669627s" podCreationTimestamp="2025-11-24 12:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:02.383071266 +0000 UTC m=+1008.997399216" watchObservedRunningTime="2025-11-24 12:16:02.38669627 +0000 UTC m=+1009.001024220" Nov 24 12:16:02 crc 
kubenswrapper[4930]: I1124 12:16:02.390030 4930 scope.go:117] "RemoveContainer" containerID="3a85f07bcf5585bedd1661fe73fdebce4498aa3105769237e587ec98fb04a00d" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.403741 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-fpnzf"] Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.411892 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-fpnzf"] Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.421504 4930 scope.go:117] "RemoveContainer" containerID="30bb967b98f159c60a4f867a57b5ba0f6d0ed8368676615f7c5aca75cf482ea9" Nov 24 12:16:02 crc kubenswrapper[4930]: E1124 12:16:02.422125 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30bb967b98f159c60a4f867a57b5ba0f6d0ed8368676615f7c5aca75cf482ea9\": container with ID starting with 30bb967b98f159c60a4f867a57b5ba0f6d0ed8368676615f7c5aca75cf482ea9 not found: ID does not exist" containerID="30bb967b98f159c60a4f867a57b5ba0f6d0ed8368676615f7c5aca75cf482ea9" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.422159 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30bb967b98f159c60a4f867a57b5ba0f6d0ed8368676615f7c5aca75cf482ea9"} err="failed to get container status \"30bb967b98f159c60a4f867a57b5ba0f6d0ed8368676615f7c5aca75cf482ea9\": rpc error: code = NotFound desc = could not find container \"30bb967b98f159c60a4f867a57b5ba0f6d0ed8368676615f7c5aca75cf482ea9\": container with ID starting with 30bb967b98f159c60a4f867a57b5ba0f6d0ed8368676615f7c5aca75cf482ea9 not found: ID does not exist" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.422183 4930 scope.go:117] "RemoveContainer" containerID="3a85f07bcf5585bedd1661fe73fdebce4498aa3105769237e587ec98fb04a00d" Nov 24 12:16:02 crc kubenswrapper[4930]: E1124 12:16:02.422469 4930 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a85f07bcf5585bedd1661fe73fdebce4498aa3105769237e587ec98fb04a00d\": container with ID starting with 3a85f07bcf5585bedd1661fe73fdebce4498aa3105769237e587ec98fb04a00d not found: ID does not exist" containerID="3a85f07bcf5585bedd1661fe73fdebce4498aa3105769237e587ec98fb04a00d" Nov 24 12:16:02 crc kubenswrapper[4930]: I1124 12:16:02.422505 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a85f07bcf5585bedd1661fe73fdebce4498aa3105769237e587ec98fb04a00d"} err="failed to get container status \"3a85f07bcf5585bedd1661fe73fdebce4498aa3105769237e587ec98fb04a00d\": rpc error: code = NotFound desc = could not find container \"3a85f07bcf5585bedd1661fe73fdebce4498aa3105769237e587ec98fb04a00d\": container with ID starting with 3a85f07bcf5585bedd1661fe73fdebce4498aa3105769237e587ec98fb04a00d not found: ID does not exist" Nov 24 12:16:04 crc kubenswrapper[4930]: I1124 12:16:04.096998 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d36af7c-358b-4880-912b-aaa6af827574" path="/var/lib/kubelet/pods/1d36af7c-358b-4880-912b-aaa6af827574/volumes" Nov 24 12:16:07 crc kubenswrapper[4930]: I1124 12:16:07.682713 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:16:08 crc kubenswrapper[4930]: I1124 12:16:08.399998 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.445193 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-c4nnz"] Nov 24 12:16:09 crc kubenswrapper[4930]: E1124 12:16:09.445562 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d36af7c-358b-4880-912b-aaa6af827574" containerName="init" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.445586 4930 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1d36af7c-358b-4880-912b-aaa6af827574" containerName="init" Nov 24 12:16:09 crc kubenswrapper[4930]: E1124 12:16:09.445605 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d36af7c-358b-4880-912b-aaa6af827574" containerName="dnsmasq-dns" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.445611 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d36af7c-358b-4880-912b-aaa6af827574" containerName="dnsmasq-dns" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.445827 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d36af7c-358b-4880-912b-aaa6af827574" containerName="dnsmasq-dns" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.446437 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-c4nnz" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.465428 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-c4nnz"] Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.480800 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7748-account-create-np47j"] Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.482446 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7748-account-create-np47j" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.499340 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7748-account-create-np47j"] Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.499506 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.537404 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f40db6-9e11-4862-8b25-286a96f9b180-operator-scripts\") pod \"cinder-db-create-c4nnz\" (UID: \"c6f40db6-9e11-4862-8b25-286a96f9b180\") " pod="openstack/cinder-db-create-c4nnz" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.537513 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0249865-90c2-41a0-9a76-54b0fa149773-operator-scripts\") pod \"cinder-7748-account-create-np47j\" (UID: \"a0249865-90c2-41a0-9a76-54b0fa149773\") " pod="openstack/cinder-7748-account-create-np47j" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.537620 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdt7b\" (UniqueName: \"kubernetes.io/projected/c6f40db6-9e11-4862-8b25-286a96f9b180-kube-api-access-gdt7b\") pod \"cinder-db-create-c4nnz\" (UID: \"c6f40db6-9e11-4862-8b25-286a96f9b180\") " pod="openstack/cinder-db-create-c4nnz" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.537652 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh2tl\" (UniqueName: \"kubernetes.io/projected/a0249865-90c2-41a0-9a76-54b0fa149773-kube-api-access-bh2tl\") pod \"cinder-7748-account-create-np47j\" (UID: 
\"a0249865-90c2-41a0-9a76-54b0fa149773\") " pod="openstack/cinder-7748-account-create-np47j" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.551062 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-hj97v"] Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.552408 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hj97v" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.590615 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hj97v"] Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.640636 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0249865-90c2-41a0-9a76-54b0fa149773-operator-scripts\") pod \"cinder-7748-account-create-np47j\" (UID: \"a0249865-90c2-41a0-9a76-54b0fa149773\") " pod="openstack/cinder-7748-account-create-np47j" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.640764 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdt7b\" (UniqueName: \"kubernetes.io/projected/c6f40db6-9e11-4862-8b25-286a96f9b180-kube-api-access-gdt7b\") pod \"cinder-db-create-c4nnz\" (UID: \"c6f40db6-9e11-4862-8b25-286a96f9b180\") " pod="openstack/cinder-db-create-c4nnz" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.640802 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh2tl\" (UniqueName: \"kubernetes.io/projected/a0249865-90c2-41a0-9a76-54b0fa149773-kube-api-access-bh2tl\") pod \"cinder-7748-account-create-np47j\" (UID: \"a0249865-90c2-41a0-9a76-54b0fa149773\") " pod="openstack/cinder-7748-account-create-np47j" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.640832 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/eb8f1e7c-7332-451d-90b2-c437bdf80712-operator-scripts\") pod \"barbican-db-create-hj97v\" (UID: \"eb8f1e7c-7332-451d-90b2-c437bdf80712\") " pod="openstack/barbican-db-create-hj97v" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.640875 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98w6k\" (UniqueName: \"kubernetes.io/projected/eb8f1e7c-7332-451d-90b2-c437bdf80712-kube-api-access-98w6k\") pod \"barbican-db-create-hj97v\" (UID: \"eb8f1e7c-7332-451d-90b2-c437bdf80712\") " pod="openstack/barbican-db-create-hj97v" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.640898 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f40db6-9e11-4862-8b25-286a96f9b180-operator-scripts\") pod \"cinder-db-create-c4nnz\" (UID: \"c6f40db6-9e11-4862-8b25-286a96f9b180\") " pod="openstack/cinder-db-create-c4nnz" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.641797 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f40db6-9e11-4862-8b25-286a96f9b180-operator-scripts\") pod \"cinder-db-create-c4nnz\" (UID: \"c6f40db6-9e11-4862-8b25-286a96f9b180\") " pod="openstack/cinder-db-create-c4nnz" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.642443 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0249865-90c2-41a0-9a76-54b0fa149773-operator-scripts\") pod \"cinder-7748-account-create-np47j\" (UID: \"a0249865-90c2-41a0-9a76-54b0fa149773\") " pod="openstack/cinder-7748-account-create-np47j" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.681903 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh2tl\" (UniqueName: 
\"kubernetes.io/projected/a0249865-90c2-41a0-9a76-54b0fa149773-kube-api-access-bh2tl\") pod \"cinder-7748-account-create-np47j\" (UID: \"a0249865-90c2-41a0-9a76-54b0fa149773\") " pod="openstack/cinder-7748-account-create-np47j" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.692247 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdt7b\" (UniqueName: \"kubernetes.io/projected/c6f40db6-9e11-4862-8b25-286a96f9b180-kube-api-access-gdt7b\") pod \"cinder-db-create-c4nnz\" (UID: \"c6f40db6-9e11-4862-8b25-286a96f9b180\") " pod="openstack/cinder-db-create-c4nnz" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.747674 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb8f1e7c-7332-451d-90b2-c437bdf80712-operator-scripts\") pod \"barbican-db-create-hj97v\" (UID: \"eb8f1e7c-7332-451d-90b2-c437bdf80712\") " pod="openstack/barbican-db-create-hj97v" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.747753 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98w6k\" (UniqueName: \"kubernetes.io/projected/eb8f1e7c-7332-451d-90b2-c437bdf80712-kube-api-access-98w6k\") pod \"barbican-db-create-hj97v\" (UID: \"eb8f1e7c-7332-451d-90b2-c437bdf80712\") " pod="openstack/barbican-db-create-hj97v" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.749275 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb8f1e7c-7332-451d-90b2-c437bdf80712-operator-scripts\") pod \"barbican-db-create-hj97v\" (UID: \"eb8f1e7c-7332-451d-90b2-c437bdf80712\") " pod="openstack/barbican-db-create-hj97v" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.781051 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-c4nnz" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.790737 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98w6k\" (UniqueName: \"kubernetes.io/projected/eb8f1e7c-7332-451d-90b2-c437bdf80712-kube-api-access-98w6k\") pod \"barbican-db-create-hj97v\" (UID: \"eb8f1e7c-7332-451d-90b2-c437bdf80712\") " pod="openstack/barbican-db-create-hj97v" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.801421 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3771-account-create-2h9v6"] Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.802610 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3771-account-create-2h9v6" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.806317 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.810388 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7748-account-create-np47j" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.814250 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rll74"] Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.816048 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rll74" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.820935 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.823658 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.823715 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bt94b" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.827158 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rll74"] Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.837808 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.882041 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-7mkzz"] Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.892985 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7mkzz" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.893700 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hj97v" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.919336 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7mkzz"] Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.925907 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3771-account-create-2h9v6"] Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.953702 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr4gk\" (UniqueName: \"kubernetes.io/projected/643713cf-450a-4539-a94c-29718af0f1bd-kube-api-access-kr4gk\") pod \"neutron-db-create-7mkzz\" (UID: \"643713cf-450a-4539-a94c-29718af0f1bd\") " pod="openstack/neutron-db-create-7mkzz" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.954110 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/643713cf-450a-4539-a94c-29718af0f1bd-operator-scripts\") pod \"neutron-db-create-7mkzz\" (UID: \"643713cf-450a-4539-a94c-29718af0f1bd\") " pod="openstack/neutron-db-create-7mkzz" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.954192 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24-operator-scripts\") pod \"barbican-3771-account-create-2h9v6\" (UID: \"7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24\") " pod="openstack/barbican-3771-account-create-2h9v6" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.954229 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-combined-ca-bundle\") pod \"keystone-db-sync-rll74\" (UID: \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\") " 
pod="openstack/keystone-db-sync-rll74" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.954267 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqwqn\" (UniqueName: \"kubernetes.io/projected/7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24-kube-api-access-bqwqn\") pod \"barbican-3771-account-create-2h9v6\" (UID: \"7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24\") " pod="openstack/barbican-3771-account-create-2h9v6" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.954286 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-config-data\") pod \"keystone-db-sync-rll74\" (UID: \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\") " pod="openstack/keystone-db-sync-rll74" Nov 24 12:16:09 crc kubenswrapper[4930]: I1124 12:16:09.954317 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vq9m\" (UniqueName: \"kubernetes.io/projected/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-kube-api-access-7vq9m\") pod \"keystone-db-sync-rll74\" (UID: \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\") " pod="openstack/keystone-db-sync-rll74" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.052511 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a15c-account-create-f8snf"] Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.055229 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vq9m\" (UniqueName: \"kubernetes.io/projected/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-kube-api-access-7vq9m\") pod \"keystone-db-sync-rll74\" (UID: \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\") " pod="openstack/keystone-db-sync-rll74" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.055289 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr4gk\" 
(UniqueName: \"kubernetes.io/projected/643713cf-450a-4539-a94c-29718af0f1bd-kube-api-access-kr4gk\") pod \"neutron-db-create-7mkzz\" (UID: \"643713cf-450a-4539-a94c-29718af0f1bd\") " pod="openstack/neutron-db-create-7mkzz" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.055312 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/643713cf-450a-4539-a94c-29718af0f1bd-operator-scripts\") pod \"neutron-db-create-7mkzz\" (UID: \"643713cf-450a-4539-a94c-29718af0f1bd\") " pod="openstack/neutron-db-create-7mkzz" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.055377 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24-operator-scripts\") pod \"barbican-3771-account-create-2h9v6\" (UID: \"7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24\") " pod="openstack/barbican-3771-account-create-2h9v6" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.055410 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-combined-ca-bundle\") pod \"keystone-db-sync-rll74\" (UID: \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\") " pod="openstack/keystone-db-sync-rll74" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.055441 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqwqn\" (UniqueName: \"kubernetes.io/projected/7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24-kube-api-access-bqwqn\") pod \"barbican-3771-account-create-2h9v6\" (UID: \"7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24\") " pod="openstack/barbican-3771-account-create-2h9v6" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.055463 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-config-data\") pod \"keystone-db-sync-rll74\" (UID: \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\") " pod="openstack/keystone-db-sync-rll74" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.056886 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/643713cf-450a-4539-a94c-29718af0f1bd-operator-scripts\") pod \"neutron-db-create-7mkzz\" (UID: \"643713cf-450a-4539-a94c-29718af0f1bd\") " pod="openstack/neutron-db-create-7mkzz" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.057870 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24-operator-scripts\") pod \"barbican-3771-account-create-2h9v6\" (UID: \"7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24\") " pod="openstack/barbican-3771-account-create-2h9v6" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.058086 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a15c-account-create-f8snf" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.060393 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-config-data\") pod \"keystone-db-sync-rll74\" (UID: \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\") " pod="openstack/keystone-db-sync-rll74" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.060920 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-combined-ca-bundle\") pod \"keystone-db-sync-rll74\" (UID: \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\") " pod="openstack/keystone-db-sync-rll74" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.062078 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.072091 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a15c-account-create-f8snf"] Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.079029 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr4gk\" (UniqueName: \"kubernetes.io/projected/643713cf-450a-4539-a94c-29718af0f1bd-kube-api-access-kr4gk\") pod \"neutron-db-create-7mkzz\" (UID: \"643713cf-450a-4539-a94c-29718af0f1bd\") " pod="openstack/neutron-db-create-7mkzz" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.079116 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqwqn\" (UniqueName: \"kubernetes.io/projected/7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24-kube-api-access-bqwqn\") pod \"barbican-3771-account-create-2h9v6\" (UID: \"7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24\") " pod="openstack/barbican-3771-account-create-2h9v6" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.079710 4930 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vq9m\" (UniqueName: \"kubernetes.io/projected/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-kube-api-access-7vq9m\") pod \"keystone-db-sync-rll74\" (UID: \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\") " pod="openstack/keystone-db-sync-rll74" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.101743 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.158936 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18617d-a48f-421a-b109-9bc576b4fb8f-operator-scripts\") pod \"neutron-a15c-account-create-f8snf\" (UID: \"7d18617d-a48f-421a-b109-9bc576b4fb8f\") " pod="openstack/neutron-a15c-account-create-f8snf" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.159016 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m54h6\" (UniqueName: \"kubernetes.io/projected/7d18617d-a48f-421a-b109-9bc576b4fb8f-kube-api-access-m54h6\") pod \"neutron-a15c-account-create-f8snf\" (UID: \"7d18617d-a48f-421a-b109-9bc576b4fb8f\") " pod="openstack/neutron-a15c-account-create-f8snf" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.173654 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-hsqrp"] Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.174728 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" podUID="5cd25a17-d530-48be-aac4-0011fc6c29f1" containerName="dnsmasq-dns" containerID="cri-o://8391c1065fcc7f01453861057e9c4813752e0d09c9777918552c7f5db44ff204" gracePeriod=10 Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.212882 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3771-account-create-2h9v6" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.246882 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rll74" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.260658 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18617d-a48f-421a-b109-9bc576b4fb8f-operator-scripts\") pod \"neutron-a15c-account-create-f8snf\" (UID: \"7d18617d-a48f-421a-b109-9bc576b4fb8f\") " pod="openstack/neutron-a15c-account-create-f8snf" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.260737 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m54h6\" (UniqueName: \"kubernetes.io/projected/7d18617d-a48f-421a-b109-9bc576b4fb8f-kube-api-access-m54h6\") pod \"neutron-a15c-account-create-f8snf\" (UID: \"7d18617d-a48f-421a-b109-9bc576b4fb8f\") " pod="openstack/neutron-a15c-account-create-f8snf" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.262173 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18617d-a48f-421a-b109-9bc576b4fb8f-operator-scripts\") pod \"neutron-a15c-account-create-f8snf\" (UID: \"7d18617d-a48f-421a-b109-9bc576b4fb8f\") " pod="openstack/neutron-a15c-account-create-f8snf" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.262193 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7mkzz" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.283298 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m54h6\" (UniqueName: \"kubernetes.io/projected/7d18617d-a48f-421a-b109-9bc576b4fb8f-kube-api-access-m54h6\") pod \"neutron-a15c-account-create-f8snf\" (UID: \"7d18617d-a48f-421a-b109-9bc576b4fb8f\") " pod="openstack/neutron-a15c-account-create-f8snf" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.344454 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7748-account-create-np47j"] Nov 24 12:16:10 crc kubenswrapper[4930]: W1124 12:16:10.355259 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0249865_90c2_41a0_9a76_54b0fa149773.slice/crio-3fa1e5939d404f3df7afa92160d47ffc7aac1bb53017368cde4976aa2d4a8857 WatchSource:0}: Error finding container 3fa1e5939d404f3df7afa92160d47ffc7aac1bb53017368cde4976aa2d4a8857: Status 404 returned error can't find the container with id 3fa1e5939d404f3df7afa92160d47ffc7aac1bb53017368cde4976aa2d4a8857 Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.389138 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a15c-account-create-f8snf" Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.444245 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7748-account-create-np47j" event={"ID":"a0249865-90c2-41a0-9a76-54b0fa149773","Type":"ContainerStarted","Data":"3fa1e5939d404f3df7afa92160d47ffc7aac1bb53017368cde4976aa2d4a8857"} Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.491560 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-c4nnz"] Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.576050 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hj97v"] Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.801509 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3771-account-create-2h9v6"] Nov 24 12:16:10 crc kubenswrapper[4930]: W1124 12:16:10.811989 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e36bea9_4d7c_4bc5_bc05_aaddf9cd3e24.slice/crio-d200e7c5436843acb4b1aa0d9500489924c2af30378ca0ee26671e986a46a29d WatchSource:0}: Error finding container d200e7c5436843acb4b1aa0d9500489924c2af30378ca0ee26671e986a46a29d: Status 404 returned error can't find the container with id d200e7c5436843acb4b1aa0d9500489924c2af30378ca0ee26671e986a46a29d Nov 24 12:16:10 crc kubenswrapper[4930]: I1124 12:16:10.934989 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rll74"] Nov 24 12:16:10 crc kubenswrapper[4930]: W1124 12:16:10.950650 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda24f8e38_6022_4f62_b5c5_4d42d7cd140c.slice/crio-f2bc3a960898fd0e55bbadd56b51588b9aa87e7827b42a2a1461e9b0876f9504 WatchSource:0}: Error finding container f2bc3a960898fd0e55bbadd56b51588b9aa87e7827b42a2a1461e9b0876f9504: 
Status 404 returned error can't find the container with id f2bc3a960898fd0e55bbadd56b51588b9aa87e7827b42a2a1461e9b0876f9504 Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.065588 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7mkzz"] Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.084868 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a15c-account-create-f8snf"] Nov 24 12:16:11 crc kubenswrapper[4930]: W1124 12:16:11.107184 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d18617d_a48f_421a_b109_9bc576b4fb8f.slice/crio-733821e18df8f2f89c93ae7f6f28bd267dca2fbdfe53925297931e1b0fee4e53 WatchSource:0}: Error finding container 733821e18df8f2f89c93ae7f6f28bd267dca2fbdfe53925297931e1b0fee4e53: Status 404 returned error can't find the container with id 733821e18df8f2f89c93ae7f6f28bd267dca2fbdfe53925297931e1b0fee4e53 Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.342492 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.455067 4930 generic.go:334] "Generic (PLEG): container finished" podID="5cd25a17-d530-48be-aac4-0011fc6c29f1" containerID="8391c1065fcc7f01453861057e9c4813752e0d09c9777918552c7f5db44ff204" exitCode=0 Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.455148 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" event={"ID":"5cd25a17-d530-48be-aac4-0011fc6c29f1","Type":"ContainerDied","Data":"8391c1065fcc7f01453861057e9c4813752e0d09c9777918552c7f5db44ff204"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.455185 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" event={"ID":"5cd25a17-d530-48be-aac4-0011fc6c29f1","Type":"ContainerDied","Data":"2e00e7dfbcf5955f98d6e6acf8313e280779c0415d09153daa0290213e74654a"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.455207 4930 scope.go:117] "RemoveContainer" containerID="8391c1065fcc7f01453861057e9c4813752e0d09c9777918552c7f5db44ff204" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.455371 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-hsqrp" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.458039 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7mkzz" event={"ID":"643713cf-450a-4539-a94c-29718af0f1bd","Type":"ContainerStarted","Data":"a9468c8d2bf2591e15284940d9bee2701f1fabd0e331e198b29500afdd1677fc"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.458073 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7mkzz" event={"ID":"643713cf-450a-4539-a94c-29718af0f1bd","Type":"ContainerStarted","Data":"9bb7e980460230efae0b38e18e71ce2cc8c74ddae74bf446e86e57a24226bf39"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.462499 4930 generic.go:334] "Generic (PLEG): container finished" podID="c6f40db6-9e11-4862-8b25-286a96f9b180" containerID="21a0ca965c71dcf79238f478e5da9fb34749019005cecbb11d72f6fe66ebf76c" exitCode=0 Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.462782 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c4nnz" event={"ID":"c6f40db6-9e11-4862-8b25-286a96f9b180","Type":"ContainerDied","Data":"21a0ca965c71dcf79238f478e5da9fb34749019005cecbb11d72f6fe66ebf76c"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.462803 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c4nnz" event={"ID":"c6f40db6-9e11-4862-8b25-286a96f9b180","Type":"ContainerStarted","Data":"d5084abb6fd7e63ab0eefa2c337980b3ffbc0648569c8699d50dfff7a865b8fb"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.464492 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rll74" event={"ID":"a24f8e38-6022-4f62-b5c5-4d42d7cd140c","Type":"ContainerStarted","Data":"f2bc3a960898fd0e55bbadd56b51588b9aa87e7827b42a2a1461e9b0876f9504"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.465955 4930 generic.go:334] "Generic (PLEG): container finished" 
podID="eb8f1e7c-7332-451d-90b2-c437bdf80712" containerID="1aa1d637426fa4174d93d26a94a08a6edd24928ef5c5bb1fe1a4755c515aee76" exitCode=0 Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.465996 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hj97v" event={"ID":"eb8f1e7c-7332-451d-90b2-c437bdf80712","Type":"ContainerDied","Data":"1aa1d637426fa4174d93d26a94a08a6edd24928ef5c5bb1fe1a4755c515aee76"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.466011 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hj97v" event={"ID":"eb8f1e7c-7332-451d-90b2-c437bdf80712","Type":"ContainerStarted","Data":"77cb84022920cc0f0c0e5b055240851c678c4b1f99d70e0e1d4a6bfb0b1a17be"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.471888 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-7mkzz" podStartSLOduration=2.471868441 podStartE2EDuration="2.471868441s" podCreationTimestamp="2025-11-24 12:16:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:11.469923625 +0000 UTC m=+1018.084251575" watchObservedRunningTime="2025-11-24 12:16:11.471868441 +0000 UTC m=+1018.086196391" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.473591 4930 generic.go:334] "Generic (PLEG): container finished" podID="7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24" containerID="2a28da84a9b2baf8217c965579387afe450bbde92845fee47e64a7d7cba400c7" exitCode=0 Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.473699 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3771-account-create-2h9v6" event={"ID":"7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24","Type":"ContainerDied","Data":"2a28da84a9b2baf8217c965579387afe450bbde92845fee47e64a7d7cba400c7"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.473729 4930 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-3771-account-create-2h9v6" event={"ID":"7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24","Type":"ContainerStarted","Data":"d200e7c5436843acb4b1aa0d9500489924c2af30378ca0ee26671e986a46a29d"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.476866 4930 scope.go:117] "RemoveContainer" containerID="1963ea197dfd85ea4a9ab13237b42688081b67abadb0ef757d3c132d3a122ab9" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.481977 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-dns-svc\") pod \"5cd25a17-d530-48be-aac4-0011fc6c29f1\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.482131 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-config\") pod \"5cd25a17-d530-48be-aac4-0011fc6c29f1\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.482166 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-ovsdbserver-nb\") pod \"5cd25a17-d530-48be-aac4-0011fc6c29f1\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.482254 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgql9\" (UniqueName: \"kubernetes.io/projected/5cd25a17-d530-48be-aac4-0011fc6c29f1-kube-api-access-cgql9\") pod \"5cd25a17-d530-48be-aac4-0011fc6c29f1\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.482294 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-ovsdbserver-sb\") pod \"5cd25a17-d530-48be-aac4-0011fc6c29f1\" (UID: \"5cd25a17-d530-48be-aac4-0011fc6c29f1\") " Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.485307 4930 generic.go:334] "Generic (PLEG): container finished" podID="a0249865-90c2-41a0-9a76-54b0fa149773" containerID="ac55d35b8510a314eaf9e9bd2d6aa0b3175d4425e4bbf9e02bf9730df6b5d315" exitCode=0 Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.485385 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7748-account-create-np47j" event={"ID":"a0249865-90c2-41a0-9a76-54b0fa149773","Type":"ContainerDied","Data":"ac55d35b8510a314eaf9e9bd2d6aa0b3175d4425e4bbf9e02bf9730df6b5d315"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.489631 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd25a17-d530-48be-aac4-0011fc6c29f1-kube-api-access-cgql9" (OuterVolumeSpecName: "kube-api-access-cgql9") pod "5cd25a17-d530-48be-aac4-0011fc6c29f1" (UID: "5cd25a17-d530-48be-aac4-0011fc6c29f1"). InnerVolumeSpecName "kube-api-access-cgql9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.493057 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a15c-account-create-f8snf" event={"ID":"7d18617d-a48f-421a-b109-9bc576b4fb8f","Type":"ContainerStarted","Data":"6d38a8e0a3b04e2ea523e18a81834ab9fadccf8507fe435f13b6a9a2eabac9e9"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.493091 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a15c-account-create-f8snf" event={"ID":"7d18617d-a48f-421a-b109-9bc576b4fb8f","Type":"ContainerStarted","Data":"733821e18df8f2f89c93ae7f6f28bd267dca2fbdfe53925297931e1b0fee4e53"} Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.538147 4930 scope.go:117] "RemoveContainer" containerID="8391c1065fcc7f01453861057e9c4813752e0d09c9777918552c7f5db44ff204" Nov 24 12:16:11 crc kubenswrapper[4930]: E1124 12:16:11.539529 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8391c1065fcc7f01453861057e9c4813752e0d09c9777918552c7f5db44ff204\": container with ID starting with 8391c1065fcc7f01453861057e9c4813752e0d09c9777918552c7f5db44ff204 not found: ID does not exist" containerID="8391c1065fcc7f01453861057e9c4813752e0d09c9777918552c7f5db44ff204" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.539605 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8391c1065fcc7f01453861057e9c4813752e0d09c9777918552c7f5db44ff204"} err="failed to get container status \"8391c1065fcc7f01453861057e9c4813752e0d09c9777918552c7f5db44ff204\": rpc error: code = NotFound desc = could not find container \"8391c1065fcc7f01453861057e9c4813752e0d09c9777918552c7f5db44ff204\": container with ID starting with 8391c1065fcc7f01453861057e9c4813752e0d09c9777918552c7f5db44ff204 not found: ID does not exist" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.539635 
4930 scope.go:117] "RemoveContainer" containerID="1963ea197dfd85ea4a9ab13237b42688081b67abadb0ef757d3c132d3a122ab9" Nov 24 12:16:11 crc kubenswrapper[4930]: E1124 12:16:11.540135 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1963ea197dfd85ea4a9ab13237b42688081b67abadb0ef757d3c132d3a122ab9\": container with ID starting with 1963ea197dfd85ea4a9ab13237b42688081b67abadb0ef757d3c132d3a122ab9 not found: ID does not exist" containerID="1963ea197dfd85ea4a9ab13237b42688081b67abadb0ef757d3c132d3a122ab9" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.540200 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1963ea197dfd85ea4a9ab13237b42688081b67abadb0ef757d3c132d3a122ab9"} err="failed to get container status \"1963ea197dfd85ea4a9ab13237b42688081b67abadb0ef757d3c132d3a122ab9\": rpc error: code = NotFound desc = could not find container \"1963ea197dfd85ea4a9ab13237b42688081b67abadb0ef757d3c132d3a122ab9\": container with ID starting with 1963ea197dfd85ea4a9ab13237b42688081b67abadb0ef757d3c132d3a122ab9 not found: ID does not exist" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.551809 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5cd25a17-d530-48be-aac4-0011fc6c29f1" (UID: "5cd25a17-d530-48be-aac4-0011fc6c29f1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.552282 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5cd25a17-d530-48be-aac4-0011fc6c29f1" (UID: "5cd25a17-d530-48be-aac4-0011fc6c29f1"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.552295 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5cd25a17-d530-48be-aac4-0011fc6c29f1" (UID: "5cd25a17-d530-48be-aac4-0011fc6c29f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.555925 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-a15c-account-create-f8snf" podStartSLOduration=1.5559065090000002 podStartE2EDuration="1.555906509s" podCreationTimestamp="2025-11-24 12:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:11.550688729 +0000 UTC m=+1018.165016689" watchObservedRunningTime="2025-11-24 12:16:11.555906509 +0000 UTC m=+1018.170234459" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.565775 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-config" (OuterVolumeSpecName: "config") pod "5cd25a17-d530-48be-aac4-0011fc6c29f1" (UID: "5cd25a17-d530-48be-aac4-0011fc6c29f1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.584276 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.584307 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.584319 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgql9\" (UniqueName: \"kubernetes.io/projected/5cd25a17-d530-48be-aac4-0011fc6c29f1-kube-api-access-cgql9\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.584328 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.584335 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cd25a17-d530-48be-aac4-0011fc6c29f1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.798597 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-hsqrp"] Nov 24 12:16:11 crc kubenswrapper[4930]: I1124 12:16:11.807933 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-hsqrp"] Nov 24 12:16:12 crc kubenswrapper[4930]: I1124 12:16:12.094263 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cd25a17-d530-48be-aac4-0011fc6c29f1" path="/var/lib/kubelet/pods/5cd25a17-d530-48be-aac4-0011fc6c29f1/volumes" Nov 24 12:16:12 crc kubenswrapper[4930]: 
I1124 12:16:12.507712 4930 generic.go:334] "Generic (PLEG): container finished" podID="7d18617d-a48f-421a-b109-9bc576b4fb8f" containerID="6d38a8e0a3b04e2ea523e18a81834ab9fadccf8507fe435f13b6a9a2eabac9e9" exitCode=0 Nov 24 12:16:12 crc kubenswrapper[4930]: I1124 12:16:12.507853 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a15c-account-create-f8snf" event={"ID":"7d18617d-a48f-421a-b109-9bc576b4fb8f","Type":"ContainerDied","Data":"6d38a8e0a3b04e2ea523e18a81834ab9fadccf8507fe435f13b6a9a2eabac9e9"} Nov 24 12:16:12 crc kubenswrapper[4930]: I1124 12:16:12.513484 4930 generic.go:334] "Generic (PLEG): container finished" podID="643713cf-450a-4539-a94c-29718af0f1bd" containerID="a9468c8d2bf2591e15284940d9bee2701f1fabd0e331e198b29500afdd1677fc" exitCode=0 Nov 24 12:16:12 crc kubenswrapper[4930]: I1124 12:16:12.513935 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7mkzz" event={"ID":"643713cf-450a-4539-a94c-29718af0f1bd","Type":"ContainerDied","Data":"a9468c8d2bf2591e15284940d9bee2701f1fabd0e331e198b29500afdd1677fc"} Nov 24 12:16:12 crc kubenswrapper[4930]: I1124 12:16:12.990070 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3771-account-create-2h9v6" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.138377 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7748-account-create-np47j" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.143129 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqwqn\" (UniqueName: \"kubernetes.io/projected/7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24-kube-api-access-bqwqn\") pod \"7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24\" (UID: \"7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24\") " Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.143334 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24-operator-scripts\") pod \"7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24\" (UID: \"7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24\") " Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.144254 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24" (UID: "7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.144511 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.150179 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-c4nnz" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.150501 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24-kube-api-access-bqwqn" (OuterVolumeSpecName: "kube-api-access-bqwqn") pod "7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24" (UID: "7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24"). InnerVolumeSpecName "kube-api-access-bqwqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.160889 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hj97v" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.245422 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh2tl\" (UniqueName: \"kubernetes.io/projected/a0249865-90c2-41a0-9a76-54b0fa149773-kube-api-access-bh2tl\") pod \"a0249865-90c2-41a0-9a76-54b0fa149773\" (UID: \"a0249865-90c2-41a0-9a76-54b0fa149773\") " Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.245493 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdt7b\" (UniqueName: \"kubernetes.io/projected/c6f40db6-9e11-4862-8b25-286a96f9b180-kube-api-access-gdt7b\") pod \"c6f40db6-9e11-4862-8b25-286a96f9b180\" (UID: \"c6f40db6-9e11-4862-8b25-286a96f9b180\") " Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.245602 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f40db6-9e11-4862-8b25-286a96f9b180-operator-scripts\") pod \"c6f40db6-9e11-4862-8b25-286a96f9b180\" (UID: \"c6f40db6-9e11-4862-8b25-286a96f9b180\") " Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.245635 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a0249865-90c2-41a0-9a76-54b0fa149773-operator-scripts\") pod \"a0249865-90c2-41a0-9a76-54b0fa149773\" (UID: \"a0249865-90c2-41a0-9a76-54b0fa149773\") " Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.245650 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98w6k\" (UniqueName: \"kubernetes.io/projected/eb8f1e7c-7332-451d-90b2-c437bdf80712-kube-api-access-98w6k\") pod \"eb8f1e7c-7332-451d-90b2-c437bdf80712\" (UID: \"eb8f1e7c-7332-451d-90b2-c437bdf80712\") " Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.245792 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb8f1e7c-7332-451d-90b2-c437bdf80712-operator-scripts\") pod \"eb8f1e7c-7332-451d-90b2-c437bdf80712\" (UID: \"eb8f1e7c-7332-451d-90b2-c437bdf80712\") " Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.246129 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqwqn\" (UniqueName: \"kubernetes.io/projected/7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24-kube-api-access-bqwqn\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.246779 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb8f1e7c-7332-451d-90b2-c437bdf80712-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb8f1e7c-7332-451d-90b2-c437bdf80712" (UID: "eb8f1e7c-7332-451d-90b2-c437bdf80712"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.247346 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f40db6-9e11-4862-8b25-286a96f9b180-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6f40db6-9e11-4862-8b25-286a96f9b180" (UID: "c6f40db6-9e11-4862-8b25-286a96f9b180"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.247417 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0249865-90c2-41a0-9a76-54b0fa149773-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a0249865-90c2-41a0-9a76-54b0fa149773" (UID: "a0249865-90c2-41a0-9a76-54b0fa149773"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.254710 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8f1e7c-7332-451d-90b2-c437bdf80712-kube-api-access-98w6k" (OuterVolumeSpecName: "kube-api-access-98w6k") pod "eb8f1e7c-7332-451d-90b2-c437bdf80712" (UID: "eb8f1e7c-7332-451d-90b2-c437bdf80712"). InnerVolumeSpecName "kube-api-access-98w6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.261714 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0249865-90c2-41a0-9a76-54b0fa149773-kube-api-access-bh2tl" (OuterVolumeSpecName: "kube-api-access-bh2tl") pod "a0249865-90c2-41a0-9a76-54b0fa149773" (UID: "a0249865-90c2-41a0-9a76-54b0fa149773"). InnerVolumeSpecName "kube-api-access-bh2tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.261821 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6f40db6-9e11-4862-8b25-286a96f9b180-kube-api-access-gdt7b" (OuterVolumeSpecName: "kube-api-access-gdt7b") pod "c6f40db6-9e11-4862-8b25-286a96f9b180" (UID: "c6f40db6-9e11-4862-8b25-286a96f9b180"). InnerVolumeSpecName "kube-api-access-gdt7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.347765 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb8f1e7c-7332-451d-90b2-c437bdf80712-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.348069 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh2tl\" (UniqueName: \"kubernetes.io/projected/a0249865-90c2-41a0-9a76-54b0fa149773-kube-api-access-bh2tl\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.348081 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdt7b\" (UniqueName: \"kubernetes.io/projected/c6f40db6-9e11-4862-8b25-286a96f9b180-kube-api-access-gdt7b\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.348090 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f40db6-9e11-4862-8b25-286a96f9b180-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.348098 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98w6k\" (UniqueName: \"kubernetes.io/projected/eb8f1e7c-7332-451d-90b2-c437bdf80712-kube-api-access-98w6k\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.348107 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0249865-90c2-41a0-9a76-54b0fa149773-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.534430 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7748-account-create-np47j" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.534450 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7748-account-create-np47j" event={"ID":"a0249865-90c2-41a0-9a76-54b0fa149773","Type":"ContainerDied","Data":"3fa1e5939d404f3df7afa92160d47ffc7aac1bb53017368cde4976aa2d4a8857"} Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.534497 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fa1e5939d404f3df7afa92160d47ffc7aac1bb53017368cde4976aa2d4a8857" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.536506 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c4nnz" event={"ID":"c6f40db6-9e11-4862-8b25-286a96f9b180","Type":"ContainerDied","Data":"d5084abb6fd7e63ab0eefa2c337980b3ffbc0648569c8699d50dfff7a865b8fb"} Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.536534 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5084abb6fd7e63ab0eefa2c337980b3ffbc0648569c8699d50dfff7a865b8fb" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.536585 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-c4nnz" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.538457 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hj97v" event={"ID":"eb8f1e7c-7332-451d-90b2-c437bdf80712","Type":"ContainerDied","Data":"77cb84022920cc0f0c0e5b055240851c678c4b1f99d70e0e1d4a6bfb0b1a17be"} Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.538495 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77cb84022920cc0f0c0e5b055240851c678c4b1f99d70e0e1d4a6bfb0b1a17be" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.538493 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hj97v" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.539828 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3771-account-create-2h9v6" event={"ID":"7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24","Type":"ContainerDied","Data":"d200e7c5436843acb4b1aa0d9500489924c2af30378ca0ee26671e986a46a29d"} Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.539872 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d200e7c5436843acb4b1aa0d9500489924c2af30378ca0ee26671e986a46a29d" Nov 24 12:16:13 crc kubenswrapper[4930]: I1124 12:16:13.540181 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3771-account-create-2h9v6" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.053758 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7mkzz" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.061030 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a15c-account-create-f8snf" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.196854 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/643713cf-450a-4539-a94c-29718af0f1bd-operator-scripts\") pod \"643713cf-450a-4539-a94c-29718af0f1bd\" (UID: \"643713cf-450a-4539-a94c-29718af0f1bd\") " Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.197265 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m54h6\" (UniqueName: \"kubernetes.io/projected/7d18617d-a48f-421a-b109-9bc576b4fb8f-kube-api-access-m54h6\") pod \"7d18617d-a48f-421a-b109-9bc576b4fb8f\" (UID: \"7d18617d-a48f-421a-b109-9bc576b4fb8f\") " Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.197777 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/643713cf-450a-4539-a94c-29718af0f1bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "643713cf-450a-4539-a94c-29718af0f1bd" (UID: "643713cf-450a-4539-a94c-29718af0f1bd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.197942 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18617d-a48f-421a-b109-9bc576b4fb8f-operator-scripts\") pod \"7d18617d-a48f-421a-b109-9bc576b4fb8f\" (UID: \"7d18617d-a48f-421a-b109-9bc576b4fb8f\") " Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.198016 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr4gk\" (UniqueName: \"kubernetes.io/projected/643713cf-450a-4539-a94c-29718af0f1bd-kube-api-access-kr4gk\") pod \"643713cf-450a-4539-a94c-29718af0f1bd\" (UID: \"643713cf-450a-4539-a94c-29718af0f1bd\") " Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.198423 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/643713cf-450a-4539-a94c-29718af0f1bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.198756 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d18617d-a48f-421a-b109-9bc576b4fb8f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d18617d-a48f-421a-b109-9bc576b4fb8f" (UID: "7d18617d-a48f-421a-b109-9bc576b4fb8f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.202212 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/643713cf-450a-4539-a94c-29718af0f1bd-kube-api-access-kr4gk" (OuterVolumeSpecName: "kube-api-access-kr4gk") pod "643713cf-450a-4539-a94c-29718af0f1bd" (UID: "643713cf-450a-4539-a94c-29718af0f1bd"). InnerVolumeSpecName "kube-api-access-kr4gk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.203788 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d18617d-a48f-421a-b109-9bc576b4fb8f-kube-api-access-m54h6" (OuterVolumeSpecName: "kube-api-access-m54h6") pod "7d18617d-a48f-421a-b109-9bc576b4fb8f" (UID: "7d18617d-a48f-421a-b109-9bc576b4fb8f"). InnerVolumeSpecName "kube-api-access-m54h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.299950 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m54h6\" (UniqueName: \"kubernetes.io/projected/7d18617d-a48f-421a-b109-9bc576b4fb8f-kube-api-access-m54h6\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.299987 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18617d-a48f-421a-b109-9bc576b4fb8f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.300002 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr4gk\" (UniqueName: \"kubernetes.io/projected/643713cf-450a-4539-a94c-29718af0f1bd-kube-api-access-kr4gk\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.573133 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rll74" event={"ID":"a24f8e38-6022-4f62-b5c5-4d42d7cd140c","Type":"ContainerStarted","Data":"a59d7eb3f75edf836d5beb89b44d3608b0974951449769906a5008b201d810b2"} Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.581970 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a15c-account-create-f8snf" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.581986 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a15c-account-create-f8snf" event={"ID":"7d18617d-a48f-421a-b109-9bc576b4fb8f","Type":"ContainerDied","Data":"733821e18df8f2f89c93ae7f6f28bd267dca2fbdfe53925297931e1b0fee4e53"} Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.582037 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="733821e18df8f2f89c93ae7f6f28bd267dca2fbdfe53925297931e1b0fee4e53" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.585018 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7mkzz" event={"ID":"643713cf-450a-4539-a94c-29718af0f1bd","Type":"ContainerDied","Data":"9bb7e980460230efae0b38e18e71ce2cc8c74ddae74bf446e86e57a24226bf39"} Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.585043 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bb7e980460230efae0b38e18e71ce2cc8c74ddae74bf446e86e57a24226bf39" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.585075 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7mkzz" Nov 24 12:16:16 crc kubenswrapper[4930]: I1124 12:16:16.612031 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rll74" podStartSLOduration=2.66054611 podStartE2EDuration="7.612010177s" podCreationTimestamp="2025-11-24 12:16:09 +0000 UTC" firstStartedPulling="2025-11-24 12:16:10.95313523 +0000 UTC m=+1017.567463180" lastFinishedPulling="2025-11-24 12:16:15.904599297 +0000 UTC m=+1022.518927247" observedRunningTime="2025-11-24 12:16:16.604028458 +0000 UTC m=+1023.218356738" watchObservedRunningTime="2025-11-24 12:16:16.612010177 +0000 UTC m=+1023.226338127" Nov 24 12:16:19 crc kubenswrapper[4930]: I1124 12:16:19.610500 4930 generic.go:334] "Generic (PLEG): container finished" podID="a24f8e38-6022-4f62-b5c5-4d42d7cd140c" containerID="a59d7eb3f75edf836d5beb89b44d3608b0974951449769906a5008b201d810b2" exitCode=0 Nov 24 12:16:19 crc kubenswrapper[4930]: I1124 12:16:19.610836 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rll74" event={"ID":"a24f8e38-6022-4f62-b5c5-4d42d7cd140c","Type":"ContainerDied","Data":"a59d7eb3f75edf836d5beb89b44d3608b0974951449769906a5008b201d810b2"} Nov 24 12:16:20 crc kubenswrapper[4930]: I1124 12:16:20.934380 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rll74" Nov 24 12:16:20 crc kubenswrapper[4930]: I1124 12:16:20.980082 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-config-data\") pod \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\" (UID: \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\") " Nov 24 12:16:20 crc kubenswrapper[4930]: I1124 12:16:20.980226 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vq9m\" (UniqueName: \"kubernetes.io/projected/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-kube-api-access-7vq9m\") pod \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\" (UID: \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\") " Nov 24 12:16:20 crc kubenswrapper[4930]: I1124 12:16:20.980310 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-combined-ca-bundle\") pod \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\" (UID: \"a24f8e38-6022-4f62-b5c5-4d42d7cd140c\") " Nov 24 12:16:20 crc kubenswrapper[4930]: I1124 12:16:20.986728 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-kube-api-access-7vq9m" (OuterVolumeSpecName: "kube-api-access-7vq9m") pod "a24f8e38-6022-4f62-b5c5-4d42d7cd140c" (UID: "a24f8e38-6022-4f62-b5c5-4d42d7cd140c"). InnerVolumeSpecName "kube-api-access-7vq9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.010273 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a24f8e38-6022-4f62-b5c5-4d42d7cd140c" (UID: "a24f8e38-6022-4f62-b5c5-4d42d7cd140c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.028783 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-config-data" (OuterVolumeSpecName: "config-data") pod "a24f8e38-6022-4f62-b5c5-4d42d7cd140c" (UID: "a24f8e38-6022-4f62-b5c5-4d42d7cd140c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.082216 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.082261 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vq9m\" (UniqueName: \"kubernetes.io/projected/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-kube-api-access-7vq9m\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.082278 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24f8e38-6022-4f62-b5c5-4d42d7cd140c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.627296 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rll74" event={"ID":"a24f8e38-6022-4f62-b5c5-4d42d7cd140c","Type":"ContainerDied","Data":"f2bc3a960898fd0e55bbadd56b51588b9aa87e7827b42a2a1461e9b0876f9504"} Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.627342 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2bc3a960898fd0e55bbadd56b51588b9aa87e7827b42a2a1461e9b0876f9504" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.627399 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rll74" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.882463 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-wftds"] Nov 24 12:16:21 crc kubenswrapper[4930]: E1124 12:16:21.882947 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd25a17-d530-48be-aac4-0011fc6c29f1" containerName="dnsmasq-dns" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.882970 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd25a17-d530-48be-aac4-0011fc6c29f1" containerName="dnsmasq-dns" Nov 24 12:16:21 crc kubenswrapper[4930]: E1124 12:16:21.882993 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a24f8e38-6022-4f62-b5c5-4d42d7cd140c" containerName="keystone-db-sync" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883001 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="a24f8e38-6022-4f62-b5c5-4d42d7cd140c" containerName="keystone-db-sync" Nov 24 12:16:21 crc kubenswrapper[4930]: E1124 12:16:21.883013 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f40db6-9e11-4862-8b25-286a96f9b180" containerName="mariadb-database-create" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883022 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f40db6-9e11-4862-8b25-286a96f9b180" containerName="mariadb-database-create" Nov 24 12:16:21 crc kubenswrapper[4930]: E1124 12:16:21.883038 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd25a17-d530-48be-aac4-0011fc6c29f1" containerName="init" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883046 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd25a17-d530-48be-aac4-0011fc6c29f1" containerName="init" Nov 24 12:16:21 crc kubenswrapper[4930]: E1124 12:16:21.883057 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643713cf-450a-4539-a94c-29718af0f1bd" 
containerName="mariadb-database-create" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883065 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="643713cf-450a-4539-a94c-29718af0f1bd" containerName="mariadb-database-create" Nov 24 12:16:21 crc kubenswrapper[4930]: E1124 12:16:21.883084 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0249865-90c2-41a0-9a76-54b0fa149773" containerName="mariadb-account-create" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883093 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0249865-90c2-41a0-9a76-54b0fa149773" containerName="mariadb-account-create" Nov 24 12:16:21 crc kubenswrapper[4930]: E1124 12:16:21.883107 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24" containerName="mariadb-account-create" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883115 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24" containerName="mariadb-account-create" Nov 24 12:16:21 crc kubenswrapper[4930]: E1124 12:16:21.883129 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d18617d-a48f-421a-b109-9bc576b4fb8f" containerName="mariadb-account-create" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883137 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d18617d-a48f-421a-b109-9bc576b4fb8f" containerName="mariadb-account-create" Nov 24 12:16:21 crc kubenswrapper[4930]: E1124 12:16:21.883154 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8f1e7c-7332-451d-90b2-c437bdf80712" containerName="mariadb-database-create" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883161 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8f1e7c-7332-451d-90b2-c437bdf80712" containerName="mariadb-database-create" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883365 4930 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="5cd25a17-d530-48be-aac4-0011fc6c29f1" containerName="dnsmasq-dns" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883390 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d18617d-a48f-421a-b109-9bc576b4fb8f" containerName="mariadb-account-create" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883409 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24" containerName="mariadb-account-create" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883420 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8f1e7c-7332-451d-90b2-c437bdf80712" containerName="mariadb-database-create" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883431 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="a24f8e38-6022-4f62-b5c5-4d42d7cd140c" containerName="keystone-db-sync" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883447 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="643713cf-450a-4539-a94c-29718af0f1bd" containerName="mariadb-database-create" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883461 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0249865-90c2-41a0-9a76-54b0fa149773" containerName="mariadb-account-create" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.883473 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6f40db6-9e11-4862-8b25-286a96f9b180" containerName="mariadb-database-create" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.884690 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.907164 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-wftds"] Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.938043 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4ldfj"] Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.939911 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.942383 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bt94b" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.944088 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.944722 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.945460 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.948904 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 12:16:21 crc kubenswrapper[4930]: I1124 12:16:21.964351 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4ldfj"] Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.010560 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-swift-storage-0\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc 
kubenswrapper[4930]: I1124 12:16:22.010621 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-combined-ca-bundle\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.010651 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-config-data\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.010679 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-svc\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.010709 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-scripts\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.010731 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-credential-keys\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 
12:16:22.010775 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-fernet-keys\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.010817 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrrcf\" (UniqueName: \"kubernetes.io/projected/d7764f5a-c517-4436-8f32-634d40c2ea18-kube-api-access-lrrcf\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.010847 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-ovsdbserver-sb\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.010881 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-ovsdbserver-nb\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.010925 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-config\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc 
kubenswrapper[4930]: I1124 12:16:22.010947 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg5d6\" (UniqueName: \"kubernetes.io/projected/b5d3d54b-300a-4420-8114-f988d5b3d951-kube-api-access-zg5d6\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.118510 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-config\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.118624 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg5d6\" (UniqueName: \"kubernetes.io/projected/b5d3d54b-300a-4420-8114-f988d5b3d951-kube-api-access-zg5d6\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.118675 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-swift-storage-0\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.118700 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-combined-ca-bundle\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 
12:16:22.118729 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-config-data\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.118758 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-svc\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.118790 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-scripts\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.118816 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-credential-keys\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.118866 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-fernet-keys\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.118911 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrrcf\" (UniqueName: 
\"kubernetes.io/projected/d7764f5a-c517-4436-8f32-634d40c2ea18-kube-api-access-lrrcf\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.118958 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-ovsdbserver-sb\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.119010 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-ovsdbserver-nb\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.120157 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-ovsdbserver-nb\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.120902 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-config\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.121320 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-svc\") pod 
\"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.122749 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-swift-storage-0\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.124227 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-ovsdbserver-sb\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.132784 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-combined-ca-bundle\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.138754 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-scripts\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.151091 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-credential-keys\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " 
pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.157217 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-config-data\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.172169 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-fernet-keys\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.196770 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b6dcc7c9c-rcxf9"] Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.198054 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.201241 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg5d6\" (UniqueName: \"kubernetes.io/projected/b5d3d54b-300a-4420-8114-f988d5b3d951-kube-api-access-zg5d6\") pod \"keystone-bootstrap-4ldfj\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.205383 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrrcf\" (UniqueName: \"kubernetes.io/projected/d7764f5a-c517-4436-8f32-634d40c2ea18-kube-api-access-lrrcf\") pod \"dnsmasq-dns-7dbf8bff67-wftds\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.209476 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.239100 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.239326 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.240056 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.246854 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-dl67f" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.262910 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.274944 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b6dcc7c9c-rcxf9"] Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.323551 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dced9a55-e50d-4a84-8876-25e6981347f8-horizon-secret-key\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.323605 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5chn\" (UniqueName: \"kubernetes.io/projected/dced9a55-e50d-4a84-8876-25e6981347f8-kube-api-access-r5chn\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.323630 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dced9a55-e50d-4a84-8876-25e6981347f8-config-data\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.323654 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dced9a55-e50d-4a84-8876-25e6981347f8-logs\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.323725 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/dced9a55-e50d-4a84-8876-25e6981347f8-scripts\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.427491 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dced9a55-e50d-4a84-8876-25e6981347f8-scripts\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.427805 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dced9a55-e50d-4a84-8876-25e6981347f8-horizon-secret-key\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.427832 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dced9a55-e50d-4a84-8876-25e6981347f8-config-data\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.427852 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5chn\" (UniqueName: \"kubernetes.io/projected/dced9a55-e50d-4a84-8876-25e6981347f8-kube-api-access-r5chn\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.427877 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dced9a55-e50d-4a84-8876-25e6981347f8-logs\") pod 
\"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.428274 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dced9a55-e50d-4a84-8876-25e6981347f8-logs\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.428437 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dced9a55-e50d-4a84-8876-25e6981347f8-scripts\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.429352 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dced9a55-e50d-4a84-8876-25e6981347f8-config-data\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.442029 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rcfd6"] Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.467254 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dced9a55-e50d-4a84-8876-25e6981347f8-horizon-secret-key\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.469857 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.495238 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.495695 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gd7zr" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.505215 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5chn\" (UniqueName: \"kubernetes.io/projected/dced9a55-e50d-4a84-8876-25e6981347f8-kube-api-access-r5chn\") pod \"horizon-5b6dcc7c9c-rcxf9\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.505656 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.534853 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-combined-ca-bundle\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.540023 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-scripts\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.536509 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-zfb9w"] Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.543806 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-config-data\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.544284 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g62nv\" (UniqueName: \"kubernetes.io/projected/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-kube-api-access-g62nv\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.544417 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-db-sync-config-data\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.544532 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-etc-machine-id\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.559082 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zfb9w" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.601839 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.601983 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.605472 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.612517 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.617228 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.617768 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.621696 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-8xbcv" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.639104 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rcfd6"] Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.649166 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lshzn\" (UniqueName: \"kubernetes.io/projected/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-kube-api-access-lshzn\") pod \"neutron-db-sync-zfb9w\" (UID: \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\") " pod="openstack/neutron-db-sync-zfb9w" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.650449 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-combined-ca-bundle\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.650640 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-combined-ca-bundle\") pod \"neutron-db-sync-zfb9w\" (UID: \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\") " pod="openstack/neutron-db-sync-zfb9w" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.650762 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-scripts\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.650866 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-config-data\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.650960 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g62nv\" (UniqueName: \"kubernetes.io/projected/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-kube-api-access-g62nv\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.651100 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-db-sync-config-data\") pod \"cinder-db-sync-rcfd6\" 
(UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.651394 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-config\") pod \"neutron-db-sync-zfb9w\" (UID: \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\") " pod="openstack/neutron-db-sync-zfb9w" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.651532 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-etc-machine-id\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.656223 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-etc-machine-id\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.672160 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-db-sync-config-data\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.685387 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zfb9w"] Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.692225 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-combined-ca-bundle\") pod 
\"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.706116 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-config-data\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.712018 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-scripts\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.713287 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g62nv\" (UniqueName: \"kubernetes.io/projected/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-kube-api-access-g62nv\") pod \"cinder-db-sync-rcfd6\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") " pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.759088 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.759166 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-log-httpd\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 
12:16:22.759276 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lshzn\" (UniqueName: \"kubernetes.io/projected/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-kube-api-access-lshzn\") pod \"neutron-db-sync-zfb9w\" (UID: \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\") " pod="openstack/neutron-db-sync-zfb9w" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.759377 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ww8l\" (UniqueName: \"kubernetes.io/projected/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-kube-api-access-8ww8l\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.759429 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-run-httpd\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.759516 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-config-data\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.759559 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-combined-ca-bundle\") pod \"neutron-db-sync-zfb9w\" (UID: \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\") " pod="openstack/neutron-db-sync-zfb9w" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.759742 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.759773 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-config\") pod \"neutron-db-sync-zfb9w\" (UID: \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\") " pod="openstack/neutron-db-sync-zfb9w" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.759788 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-scripts\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.771433 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-config\") pod \"neutron-db-sync-zfb9w\" (UID: \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\") " pod="openstack/neutron-db-sync-zfb9w" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.772415 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-combined-ca-bundle\") pod \"neutron-db-sync-zfb9w\" (UID: \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\") " pod="openstack/neutron-db-sync-zfb9w" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.781234 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.818345 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.832173 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lshzn\" (UniqueName: \"kubernetes.io/projected/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-kube-api-access-lshzn\") pod \"neutron-db-sync-zfb9w\" (UID: \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\") " pod="openstack/neutron-db-sync-zfb9w" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.859931 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.867869 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.868517 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-scripts\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.868608 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.868641 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-log-httpd\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.868694 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ww8l\" (UniqueName: \"kubernetes.io/projected/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-kube-api-access-8ww8l\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.868718 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-run-httpd\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.868764 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-config-data\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.871472 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.872549 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-log-httpd\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.879885 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.880127 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-config-data\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.880416 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-run-httpd\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.891588 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-scripts\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.895738 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-b68t2" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 
12:16:22.896000 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.897150 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.897297 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.909678 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rcfd6" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.909878 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.937619 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-drnpc"] Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.938831 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-drnpc" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.943799 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.944191 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.944712 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-98lm4" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.958063 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ww8l\" (UniqueName: \"kubernetes.io/projected/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-kube-api-access-8ww8l\") pod \"ceilometer-0\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " pod="openstack/ceilometer-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.978593 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.978967 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.978992 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.979166 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d6db25-381f-4f83-a033-984addf8da0d-combined-ca-bundle\") pod \"barbican-db-sync-drnpc\" (UID: \"35d6db25-381f-4f83-a033-984addf8da0d\") " pod="openstack/barbican-db-sync-drnpc" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.979213 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-config-data\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.979320 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-scripts\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.979349 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35d6db25-381f-4f83-a033-984addf8da0d-db-sync-config-data\") pod \"barbican-db-sync-drnpc\" (UID: \"35d6db25-381f-4f83-a033-984addf8da0d\") " pod="openstack/barbican-db-sync-drnpc" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.979395 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q4pb4\" (UniqueName: \"kubernetes.io/projected/35d6db25-381f-4f83-a033-984addf8da0d-kube-api-access-q4pb4\") pod \"barbican-db-sync-drnpc\" (UID: \"35d6db25-381f-4f83-a033-984addf8da0d\") " pod="openstack/barbican-db-sync-drnpc" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.979425 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.979447 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-logs\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:22 crc kubenswrapper[4930]: I1124 12:16:22.979484 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxc48\" (UniqueName: \"kubernetes.io/projected/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-kube-api-access-qxc48\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.017628 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zfb9w" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.035921 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-pzqxp"] Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.037046 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.038001 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.039171 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.039328 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jr9fw" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.045010 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.091070 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35d6db25-381f-4f83-a033-984addf8da0d-db-sync-config-data\") pod \"barbican-db-sync-drnpc\" (UID: \"35d6db25-381f-4f83-a033-984addf8da0d\") " pod="openstack/barbican-db-sync-drnpc" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.091139 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4pb4\" (UniqueName: \"kubernetes.io/projected/35d6db25-381f-4f83-a033-984addf8da0d-kube-api-access-q4pb4\") pod \"barbican-db-sync-drnpc\" (UID: \"35d6db25-381f-4f83-a033-984addf8da0d\") " pod="openstack/barbican-db-sync-drnpc" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.091190 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.091224 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-logs\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.091848 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxc48\" (UniqueName: \"kubernetes.io/projected/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-kube-api-access-qxc48\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.091906 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-combined-ca-bundle\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.091934 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-config-data\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.091960 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44fb1f8c-0796-4310-b053-8222837cfbf2-logs\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.092003 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.092080 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdfx9\" (UniqueName: \"kubernetes.io/projected/44fb1f8c-0796-4310-b053-8222837cfbf2-kube-api-access-fdfx9\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.092156 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.092220 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.092328 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-scripts\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.092391 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/35d6db25-381f-4f83-a033-984addf8da0d-combined-ca-bundle\") pod \"barbican-db-sync-drnpc\" (UID: \"35d6db25-381f-4f83-a033-984addf8da0d\") " pod="openstack/barbican-db-sync-drnpc" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.092430 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-config-data\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.092522 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-scripts\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.093261 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-wftds"] Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.094682 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.095105 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-logs\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.098531 4930 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.098975 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35d6db25-381f-4f83-a033-984addf8da0d-db-sync-config-data\") pod \"barbican-db-sync-drnpc\" (UID: \"35d6db25-381f-4f83-a033-984addf8da0d\") " pod="openstack/barbican-db-sync-drnpc" Nov 24 12:16:23 crc kubenswrapper[4930]: I1124 12:16:23.102234 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-scripts\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.115968 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-config-data\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.119623 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxc48\" (UniqueName: \"kubernetes.io/projected/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-kube-api-access-qxc48\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.121310 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.131708 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-drnpc"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.137083 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-pzqxp"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.137212 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.140394 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d6db25-381f-4f83-a033-984addf8da0d-combined-ca-bundle\") pod \"barbican-db-sync-drnpc\" (UID: \"35d6db25-381f-4f83-a033-984addf8da0d\") " pod="openstack/barbican-db-sync-drnpc" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.147936 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4pb4\" (UniqueName: \"kubernetes.io/projected/35d6db25-381f-4f83-a033-984addf8da0d-kube-api-access-q4pb4\") pod \"barbican-db-sync-drnpc\" (UID: \"35d6db25-381f-4f83-a033-984addf8da0d\") " pod="openstack/barbican-db-sync-drnpc" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.158588 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-vcnwb"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.160812 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.168745 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.174666 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-vcnwb"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.176314 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.181246 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.181478 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.183192 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.193857 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78bef63c-9e71-4322-94ca-83b6815c2ecd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.193908 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.193938 4930 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zffq8\" (UniqueName: \"kubernetes.io/projected/6bcc447f-d403-4536-8f54-f728fa999a19-kube-api-access-zffq8\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.193981 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-scripts\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194063 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-config\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194099 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194166 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194188 4930 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-dns-swift-storage-0\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194260 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2tgj\" (UniqueName: \"kubernetes.io/projected/78bef63c-9e71-4322-94ca-83b6815c2ecd-kube-api-access-f2tgj\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194331 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-ovsdbserver-sb\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194354 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194421 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-ovsdbserver-nb\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 
crc kubenswrapper[4930]: I1124 12:16:23.194454 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-dns-svc\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194474 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78bef63c-9e71-4322-94ca-83b6815c2ecd-logs\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194585 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-combined-ca-bundle\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194619 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-config-data\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194655 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44fb1f8c-0796-4310-b053-8222837cfbf2-logs\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194731 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fdfx9\" (UniqueName: \"kubernetes.io/projected/44fb1f8c-0796-4310-b053-8222837cfbf2-kube-api-access-fdfx9\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.194785 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.202292 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44fb1f8c-0796-4310-b053-8222837cfbf2-logs\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.209806 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56d56cd8f5-hxqgp"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.211433 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-scripts\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.211497 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.218476 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-combined-ca-bundle\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.221493 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-config-data\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.224148 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.235879 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdfx9\" (UniqueName: \"kubernetes.io/projected/44fb1f8c-0796-4310-b053-8222837cfbf2-kube-api-access-fdfx9\") pod \"placement-db-sync-pzqxp\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.235951 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56d56cd8f5-hxqgp"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.301707 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-drnpc" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.319144 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.348326 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cbd922a6-f938-478a-8db2-d99dc37f3a69-config-data\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.352672 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-config\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.352725 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.352800 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.353571 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-dns-swift-storage-0\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " 
pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.353688 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbd922a6-f938-478a-8db2-d99dc37f3a69-scripts\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.354163 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2tgj\" (UniqueName: \"kubernetes.io/projected/78bef63c-9e71-4322-94ca-83b6815c2ecd-kube-api-access-f2tgj\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.354242 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbd922a6-f938-478a-8db2-d99dc37f3a69-logs\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.354283 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-ovsdbserver-sb\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.354318 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc 
kubenswrapper[4930]: I1124 12:16:23.354365 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m78rf\" (UniqueName: \"kubernetes.io/projected/cbd922a6-f938-478a-8db2-d99dc37f3a69-kube-api-access-m78rf\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.354446 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-ovsdbserver-nb\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.354482 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-config\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.355498 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-ovsdbserver-sb\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.355093 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-dns-svc\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.354477 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-dns-svc\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.356071 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78bef63c-9e71-4322-94ca-83b6815c2ecd-logs\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.356247 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.356292 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78bef63c-9e71-4322-94ca-83b6815c2ecd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.356313 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.356342 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zffq8\" (UniqueName: 
\"kubernetes.io/projected/6bcc447f-d403-4536-8f54-f728fa999a19-kube-api-access-zffq8\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.356382 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cbd922a6-f938-478a-8db2-d99dc37f3a69-horizon-secret-key\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.356713 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-dns-swift-storage-0\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.358042 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78bef63c-9e71-4322-94ca-83b6815c2ecd-logs\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.358216 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78bef63c-9e71-4322-94ca-83b6815c2ecd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.358457 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.358595 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.359045 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-ovsdbserver-nb\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.361565 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.370464 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.371086 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-pzqxp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.374318 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.387218 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-wftds"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.422215 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zffq8\" (UniqueName: \"kubernetes.io/projected/6bcc447f-d403-4536-8f54-f728fa999a19-kube-api-access-zffq8\") pod \"dnsmasq-dns-76c58b6d97-vcnwb\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") " pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.427166 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2tgj\" (UniqueName: \"kubernetes.io/projected/78bef63c-9e71-4322-94ca-83b6815c2ecd-kube-api-access-f2tgj\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.458065 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cbd922a6-f938-478a-8db2-d99dc37f3a69-horizon-secret-key\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.458108 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/cbd922a6-f938-478a-8db2-d99dc37f3a69-config-data\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.458159 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbd922a6-f938-478a-8db2-d99dc37f3a69-scripts\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.458199 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbd922a6-f938-478a-8db2-d99dc37f3a69-logs\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.458228 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m78rf\" (UniqueName: \"kubernetes.io/projected/cbd922a6-f938-478a-8db2-d99dc37f3a69-kube-api-access-m78rf\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.462802 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.465774 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbd922a6-f938-478a-8db2-d99dc37f3a69-scripts\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") 
" pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.466868 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cbd922a6-f938-478a-8db2-d99dc37f3a69-config-data\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.468256 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbd922a6-f938-478a-8db2-d99dc37f3a69-logs\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.472458 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cbd922a6-f938-478a-8db2-d99dc37f3a69-horizon-secret-key\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.483510 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m78rf\" (UniqueName: \"kubernetes.io/projected/cbd922a6-f938-478a-8db2-d99dc37f3a69-kube-api-access-m78rf\") pod \"horizon-56d56cd8f5-hxqgp\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.499942 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4ldfj"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.537710 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.554871 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.561207 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.686698 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" event={"ID":"d7764f5a-c517-4436-8f32-634d40c2ea18","Type":"ContainerStarted","Data":"baefea08987257f53a0b52dbf6e896b91bcb8c109a6811e303d1c5f3ec3ad13d"} Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:23.688217 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4ldfj" event={"ID":"b5d3d54b-300a-4420-8114-f988d5b3d951","Type":"ContainerStarted","Data":"2eb51df36b4c5dd0d08ee55eb055a38c3b3910a0a11db2536cc8aee0923377c3"} Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.517921 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.561702 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b6dcc7c9c-rcxf9"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.621776 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69679b8f55-8knvw"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.623606 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.636564 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.660268 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69679b8f55-8knvw"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.681281 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ecc66ca-44d6-4220-9f1c-2b054239f484-scripts\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.681323 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rnp5\" (UniqueName: \"kubernetes.io/projected/9ecc66ca-44d6-4220-9f1c-2b054239f484-kube-api-access-7rnp5\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.681348 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ecc66ca-44d6-4220-9f1c-2b054239f484-horizon-secret-key\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.681456 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ecc66ca-44d6-4220-9f1c-2b054239f484-logs\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc 
kubenswrapper[4930]: I1124 12:16:24.681594 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ecc66ca-44d6-4220-9f1c-2b054239f484-config-data\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.722350 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b6dcc7c9c-rcxf9"] Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.727774 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4ldfj" event={"ID":"b5d3d54b-300a-4420-8114-f988d5b3d951","Type":"ContainerStarted","Data":"d7e4690fd430981733b5b10b95243943f344eb7c87ebf4f521680c069ae3320f"} Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.751741 4930 generic.go:334] "Generic (PLEG): container finished" podID="d7764f5a-c517-4436-8f32-634d40c2ea18" containerID="477fd7068561e5dcaa5c5029ed50163dfef608d6ea7a092c9063af28cebfe5c9" exitCode=0 Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.752120 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" event={"ID":"d7764f5a-c517-4436-8f32-634d40c2ea18","Type":"ContainerDied","Data":"477fd7068561e5dcaa5c5029ed50163dfef608d6ea7a092c9063af28cebfe5c9"} Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.752809 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4ldfj" podStartSLOduration=3.7527930019999998 podStartE2EDuration="3.752793002s" podCreationTimestamp="2025-11-24 12:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:24.750026053 +0000 UTC m=+1031.364354003" watchObservedRunningTime="2025-11-24 12:16:24.752793002 +0000 UTC m=+1031.367120952" Nov 24 
12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.783608 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ecc66ca-44d6-4220-9f1c-2b054239f484-logs\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.783775 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ecc66ca-44d6-4220-9f1c-2b054239f484-config-data\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.783857 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ecc66ca-44d6-4220-9f1c-2b054239f484-scripts\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.783883 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rnp5\" (UniqueName: \"kubernetes.io/projected/9ecc66ca-44d6-4220-9f1c-2b054239f484-kube-api-access-7rnp5\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.783905 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ecc66ca-44d6-4220-9f1c-2b054239f484-horizon-secret-key\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.784118 4930 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ecc66ca-44d6-4220-9f1c-2b054239f484-logs\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.785194 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ecc66ca-44d6-4220-9f1c-2b054239f484-scripts\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.786399 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ecc66ca-44d6-4220-9f1c-2b054239f484-config-data\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.791299 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ecc66ca-44d6-4220-9f1c-2b054239f484-horizon-secret-key\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.808872 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rnp5\" (UniqueName: \"kubernetes.io/projected/9ecc66ca-44d6-4220-9f1c-2b054239f484-kube-api-access-7rnp5\") pod \"horizon-69679b8f55-8knvw\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:24 crc kubenswrapper[4930]: I1124 12:16:24.980936 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.057468 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.277814 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.283348 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.296059 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-swift-storage-0\") pod \"d7764f5a-c517-4436-8f32-634d40c2ea18\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.296140 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-config\") pod \"d7764f5a-c517-4436-8f32-634d40c2ea18\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.296232 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-ovsdbserver-nb\") pod \"d7764f5a-c517-4436-8f32-634d40c2ea18\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.296276 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-svc\") pod \"d7764f5a-c517-4436-8f32-634d40c2ea18\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 
12:16:25.296309 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-ovsdbserver-sb\") pod \"d7764f5a-c517-4436-8f32-634d40c2ea18\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.296339 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrrcf\" (UniqueName: \"kubernetes.io/projected/d7764f5a-c517-4436-8f32-634d40c2ea18-kube-api-access-lrrcf\") pod \"d7764f5a-c517-4436-8f32-634d40c2ea18\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.320682 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7764f5a-c517-4436-8f32-634d40c2ea18-kube-api-access-lrrcf" (OuterVolumeSpecName: "kube-api-access-lrrcf") pod "d7764f5a-c517-4436-8f32-634d40c2ea18" (UID: "d7764f5a-c517-4436-8f32-634d40c2ea18"). InnerVolumeSpecName "kube-api-access-lrrcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.380754 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7764f5a-c517-4436-8f32-634d40c2ea18" (UID: "d7764f5a-c517-4436-8f32-634d40c2ea18"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.382460 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-config" (OuterVolumeSpecName: "config") pod "d7764f5a-c517-4436-8f32-634d40c2ea18" (UID: "d7764f5a-c517-4436-8f32-634d40c2ea18"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.407154 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.407202 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrrcf\" (UniqueName: \"kubernetes.io/projected/d7764f5a-c517-4436-8f32-634d40c2ea18-kube-api-access-lrrcf\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.407218 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.463497 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7764f5a-c517-4436-8f32-634d40c2ea18" (UID: "d7764f5a-c517-4436-8f32-634d40c2ea18"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.464783 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zfb9w"] Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.508385 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7764f5a-c517-4436-8f32-634d40c2ea18" (UID: "d7764f5a-c517-4436-8f32-634d40c2ea18"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.508453 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-pzqxp"] Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.508794 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-svc\") pod \"d7764f5a-c517-4436-8f32-634d40c2ea18\" (UID: \"d7764f5a-c517-4436-8f32-634d40c2ea18\") " Nov 24 12:16:25 crc kubenswrapper[4930]: W1124 12:16:25.509104 4930 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d7764f5a-c517-4436-8f32-634d40c2ea18/volumes/kubernetes.io~configmap/dns-svc Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.509117 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7764f5a-c517-4436-8f32-634d40c2ea18" (UID: "d7764f5a-c517-4436-8f32-634d40c2ea18"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.509522 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.509635 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.507357 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d7764f5a-c517-4436-8f32-634d40c2ea18" (UID: "d7764f5a-c517-4436-8f32-634d40c2ea18"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.538814 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-drnpc"] Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.553587 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rcfd6"] Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.564654 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-vcnwb"] Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.587356 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56d56cd8f5-hxqgp"] Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.595967 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.613437 4930 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/d7764f5a-c517-4436-8f32-634d40c2ea18-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.615743 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69679b8f55-8knvw"] Nov 24 12:16:25 crc kubenswrapper[4930]: W1124 12:16:25.629896 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ecc66ca_44d6_4220_9f1c_2b054239f484.slice/crio-a8cd585f53347b746b8db09b2655451576e6a0b42d8c497541831750a492d509 WatchSource:0}: Error finding container a8cd585f53347b746b8db09b2655451576e6a0b42d8c497541831750a492d509: Status 404 returned error can't find the container with id a8cd585f53347b746b8db09b2655451576e6a0b42d8c497541831750a492d509 Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.769228 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" event={"ID":"d7764f5a-c517-4436-8f32-634d40c2ea18","Type":"ContainerDied","Data":"baefea08987257f53a0b52dbf6e896b91bcb8c109a6811e303d1c5f3ec3ad13d"} Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.769292 4930 scope.go:117] "RemoveContainer" containerID="477fd7068561e5dcaa5c5029ed50163dfef608d6ea7a092c9063af28cebfe5c9" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.769416 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dbf8bff67-wftds" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.774187 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pzqxp" event={"ID":"44fb1f8c-0796-4310-b053-8222837cfbf2","Type":"ContainerStarted","Data":"a96e3bcfc422663b448d327841ceb08d0138d74afbe86d8763d5b4af252fc39e"} Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.787511 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6dcc7c9c-rcxf9" event={"ID":"dced9a55-e50d-4a84-8876-25e6981347f8","Type":"ContainerStarted","Data":"b27b34f1f43487985c502542542042a1344e660e413b3f934cb76b50ac3f0330"} Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.790813 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zfb9w" event={"ID":"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f","Type":"ContainerStarted","Data":"b007efcfdd76a4d37c2d71ef23b765c5eda88f96cff9adb874ea675f244c3c32"} Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.796993 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-drnpc" event={"ID":"35d6db25-381f-4f83-a033-984addf8da0d","Type":"ContainerStarted","Data":"ad22efb08110d1ec71e9431dfa64d735532133a6fb490533b95331ec6fcf3393"} Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.804091 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56d56cd8f5-hxqgp" event={"ID":"cbd922a6-f938-478a-8db2-d99dc37f3a69","Type":"ContainerStarted","Data":"888b4987be8aebaeb2beffcba12a085a3d5413a7aff327992c12b22dfd451c8d"} Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.810326 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69679b8f55-8knvw" event={"ID":"9ecc66ca-44d6-4220-9f1c-2b054239f484","Type":"ContainerStarted","Data":"a8cd585f53347b746b8db09b2655451576e6a0b42d8c497541831750a492d509"} Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.817262 4930 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9","Type":"ContainerStarted","Data":"c908d64474eb25486fdefb2cbd39d7d875305ce8740668855e34de9a11fa9270"} Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.822663 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rcfd6" event={"ID":"9169db1f-c94f-45a3-bc97-6ad40d17b7d1","Type":"ContainerStarted","Data":"6e051fbb02b7de4fe8f850bb89144fa0dd346476448a91def85fecbd8bee41d0"} Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.824738 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-zfb9w" podStartSLOduration=3.824693477 podStartE2EDuration="3.824693477s" podCreationTimestamp="2025-11-24 12:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:25.813366481 +0000 UTC m=+1032.427694441" watchObservedRunningTime="2025-11-24 12:16:25.824693477 +0000 UTC m=+1032.439021427" Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.828575 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78bef63c-9e71-4322-94ca-83b6815c2ecd","Type":"ContainerStarted","Data":"8b079d52373bcd99052bbcaca383851bfd8433ba9dcbf7f793478267787fd523"} Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.833688 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" event={"ID":"6bcc447f-d403-4536-8f54-f728fa999a19","Type":"ContainerStarted","Data":"73ccdc8bd1365c0f92131701df327d81693b62df7b819b5e4660b7938ddb4b9c"} Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.892667 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-wftds"] Nov 24 12:16:25 crc kubenswrapper[4930]: I1124 12:16:25.904598 4930 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-wftds"] Nov 24 12:16:26 crc kubenswrapper[4930]: I1124 12:16:26.099686 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7764f5a-c517-4436-8f32-634d40c2ea18" path="/var/lib/kubelet/pods/d7764f5a-c517-4436-8f32-634d40c2ea18/volumes" Nov 24 12:16:26 crc kubenswrapper[4930]: I1124 12:16:26.563767 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:16:26 crc kubenswrapper[4930]: W1124 12:16:26.593216 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d8b71e2_5e0b_4c52_b6b9_4f4caa4da884.slice/crio-574a45477bc327a12f85f914bb9f9afc4569ee4f827dbe42e6da49632e5c0ea5 WatchSource:0}: Error finding container 574a45477bc327a12f85f914bb9f9afc4569ee4f827dbe42e6da49632e5c0ea5: Status 404 returned error can't find the container with id 574a45477bc327a12f85f914bb9f9afc4569ee4f827dbe42e6da49632e5c0ea5 Nov 24 12:16:26 crc kubenswrapper[4930]: I1124 12:16:26.850991 4930 generic.go:334] "Generic (PLEG): container finished" podID="6bcc447f-d403-4536-8f54-f728fa999a19" containerID="b8836368a310634418cbe3d6d709a21b5f8c7b65f9e1b592dedf4c1375d4e848" exitCode=0 Nov 24 12:16:26 crc kubenswrapper[4930]: I1124 12:16:26.851064 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" event={"ID":"6bcc447f-d403-4536-8f54-f728fa999a19","Type":"ContainerDied","Data":"b8836368a310634418cbe3d6d709a21b5f8c7b65f9e1b592dedf4c1375d4e848"} Nov 24 12:16:26 crc kubenswrapper[4930]: I1124 12:16:26.855450 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884","Type":"ContainerStarted","Data":"574a45477bc327a12f85f914bb9f9afc4569ee4f827dbe42e6da49632e5c0ea5"} Nov 24 12:16:26 crc kubenswrapper[4930]: I1124 12:16:26.890274 4930 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-db-sync-zfb9w" event={"ID":"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f","Type":"ContainerStarted","Data":"238acbf81621feccb913f177fdbc5cc93e7434423a7af655da8b2ef55e2d92f9"} Nov 24 12:16:26 crc kubenswrapper[4930]: I1124 12:16:26.913794 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78bef63c-9e71-4322-94ca-83b6815c2ecd","Type":"ContainerStarted","Data":"e5e39d826bd8bc27461b4728d5bd01ee4f94182a323943518f25330da3ebfe11"} Nov 24 12:16:27 crc kubenswrapper[4930]: I1124 12:16:27.971360 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884","Type":"ContainerStarted","Data":"87f370abca673d2f0663176501108ee245f8be019fe0986f466cea51e425e4ef"} Nov 24 12:16:27 crc kubenswrapper[4930]: I1124 12:16:27.980782 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78bef63c-9e71-4322-94ca-83b6815c2ecd","Type":"ContainerStarted","Data":"6f7648c425d4f0cd816a773566da1f9519c4f58668e1e00c92d06218f99f14d8"} Nov 24 12:16:27 crc kubenswrapper[4930]: I1124 12:16:27.980955 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="78bef63c-9e71-4322-94ca-83b6815c2ecd" containerName="glance-log" containerID="cri-o://e5e39d826bd8bc27461b4728d5bd01ee4f94182a323943518f25330da3ebfe11" gracePeriod=30 Nov 24 12:16:27 crc kubenswrapper[4930]: I1124 12:16:27.981579 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="78bef63c-9e71-4322-94ca-83b6815c2ecd" containerName="glance-httpd" containerID="cri-o://6f7648c425d4f0cd816a773566da1f9519c4f58668e1e00c92d06218f99f14d8" gracePeriod=30 Nov 24 12:16:27 crc kubenswrapper[4930]: I1124 12:16:27.986052 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" event={"ID":"6bcc447f-d403-4536-8f54-f728fa999a19","Type":"ContainerStarted","Data":"935b0f5442c3bfffb88a6a25069913cd16ac8f8b9b0c2938e5f0e6d3ef9a5574"} Nov 24 12:16:28 crc kubenswrapper[4930]: I1124 12:16:28.017002 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.016980913 podStartE2EDuration="6.016980913s" podCreationTimestamp="2025-11-24 12:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:28.004055431 +0000 UTC m=+1034.618383401" watchObservedRunningTime="2025-11-24 12:16:28.016980913 +0000 UTC m=+1034.631308863" Nov 24 12:16:28 crc kubenswrapper[4930]: I1124 12:16:28.034841 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" podStartSLOduration=6.034823966 podStartE2EDuration="6.034823966s" podCreationTimestamp="2025-11-24 12:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:28.028459673 +0000 UTC m=+1034.642787643" watchObservedRunningTime="2025-11-24 12:16:28.034823966 +0000 UTC m=+1034.649151916" Nov 24 12:16:28 crc kubenswrapper[4930]: E1124 12:16:28.405669 4930 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78bef63c_9e71_4322_94ca_83b6815c2ecd.slice/crio-conmon-6f7648c425d4f0cd816a773566da1f9519c4f58668e1e00c92d06218f99f14d8.scope\": RecentStats: unable to find data in memory cache]" Nov 24 12:16:28 crc kubenswrapper[4930]: I1124 12:16:28.538211 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:29 crc kubenswrapper[4930]: I1124 12:16:29.022886 
4930 generic.go:334] "Generic (PLEG): container finished" podID="b5d3d54b-300a-4420-8114-f988d5b3d951" containerID="d7e4690fd430981733b5b10b95243943f344eb7c87ebf4f521680c069ae3320f" exitCode=0 Nov 24 12:16:29 crc kubenswrapper[4930]: I1124 12:16:29.022972 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4ldfj" event={"ID":"b5d3d54b-300a-4420-8114-f988d5b3d951","Type":"ContainerDied","Data":"d7e4690fd430981733b5b10b95243943f344eb7c87ebf4f521680c069ae3320f"} Nov 24 12:16:29 crc kubenswrapper[4930]: I1124 12:16:29.032988 4930 generic.go:334] "Generic (PLEG): container finished" podID="78bef63c-9e71-4322-94ca-83b6815c2ecd" containerID="6f7648c425d4f0cd816a773566da1f9519c4f58668e1e00c92d06218f99f14d8" exitCode=0 Nov 24 12:16:29 crc kubenswrapper[4930]: I1124 12:16:29.033036 4930 generic.go:334] "Generic (PLEG): container finished" podID="78bef63c-9e71-4322-94ca-83b6815c2ecd" containerID="e5e39d826bd8bc27461b4728d5bd01ee4f94182a323943518f25330da3ebfe11" exitCode=143 Nov 24 12:16:29 crc kubenswrapper[4930]: I1124 12:16:29.033118 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78bef63c-9e71-4322-94ca-83b6815c2ecd","Type":"ContainerDied","Data":"6f7648c425d4f0cd816a773566da1f9519c4f58668e1e00c92d06218f99f14d8"} Nov 24 12:16:29 crc kubenswrapper[4930]: I1124 12:16:29.033156 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78bef63c-9e71-4322-94ca-83b6815c2ecd","Type":"ContainerDied","Data":"e5e39d826bd8bc27461b4728d5bd01ee4f94182a323943518f25330da3ebfe11"} Nov 24 12:16:29 crc kubenswrapper[4930]: I1124 12:16:29.043677 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884","Type":"ContainerStarted","Data":"b6ad1815a5ae0bf311b7cded1a10169fdfcd1539e9c3b8e1da7e387ad44265b2"} Nov 24 12:16:29 crc kubenswrapper[4930]: 
I1124 12:16:29.043791 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" containerName="glance-log" containerID="cri-o://87f370abca673d2f0663176501108ee245f8be019fe0986f466cea51e425e4ef" gracePeriod=30 Nov 24 12:16:29 crc kubenswrapper[4930]: I1124 12:16:29.043828 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" containerName="glance-httpd" containerID="cri-o://b6ad1815a5ae0bf311b7cded1a10169fdfcd1539e9c3b8e1da7e387ad44265b2" gracePeriod=30 Nov 24 12:16:29 crc kubenswrapper[4930]: I1124 12:16:29.110462 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.110438169 podStartE2EDuration="7.110438169s" podCreationTimestamp="2025-11-24 12:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:29.098721522 +0000 UTC m=+1035.713049492" watchObservedRunningTime="2025-11-24 12:16:29.110438169 +0000 UTC m=+1035.724766119" Nov 24 12:16:30 crc kubenswrapper[4930]: I1124 12:16:30.055436 4930 generic.go:334] "Generic (PLEG): container finished" podID="6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" containerID="b6ad1815a5ae0bf311b7cded1a10169fdfcd1539e9c3b8e1da7e387ad44265b2" exitCode=0 Nov 24 12:16:30 crc kubenswrapper[4930]: I1124 12:16:30.055793 4930 generic.go:334] "Generic (PLEG): container finished" podID="6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" containerID="87f370abca673d2f0663176501108ee245f8be019fe0986f466cea51e425e4ef" exitCode=143 Nov 24 12:16:30 crc kubenswrapper[4930]: I1124 12:16:30.055661 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884","Type":"ContainerDied","Data":"b6ad1815a5ae0bf311b7cded1a10169fdfcd1539e9c3b8e1da7e387ad44265b2"} Nov 24 12:16:30 crc kubenswrapper[4930]: I1124 12:16:30.056002 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884","Type":"ContainerDied","Data":"87f370abca673d2f0663176501108ee245f8be019fe0986f466cea51e425e4ef"} Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.439202 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56d56cd8f5-hxqgp"] Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.482618 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69b96dd4dd-2xcvn"] Nov 24 12:16:31 crc kubenswrapper[4930]: E1124 12:16:31.483006 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7764f5a-c517-4436-8f32-634d40c2ea18" containerName="init" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.483017 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7764f5a-c517-4436-8f32-634d40c2ea18" containerName="init" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.483207 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7764f5a-c517-4436-8f32-634d40c2ea18" containerName="init" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.484166 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.489045 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.506434 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69b96dd4dd-2xcvn"] Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.554793 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69679b8f55-8knvw"] Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.589245 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b7594b454-4gfnw"] Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.593912 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.602116 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b7594b454-4gfnw"] Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.612415 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khmhf\" (UniqueName: \"kubernetes.io/projected/dc1269fb-938b-4634-a683-9b0375e01915-kube-api-access-khmhf\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.612518 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc1269fb-938b-4634-a683-9b0375e01915-config-data\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.612580 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc1269fb-938b-4634-a683-9b0375e01915-logs\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.612601 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc1269fb-938b-4634-a683-9b0375e01915-scripts\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.612618 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-horizon-secret-key\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.612687 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-combined-ca-bundle\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.612724 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-horizon-tls-certs\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.714180 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/dc1269fb-938b-4634-a683-9b0375e01915-logs\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.714235 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc1269fb-938b-4634-a683-9b0375e01915-scripts\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.714269 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-horizon-secret-key\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.714339 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8851e459-770d-4a08-8b35-41e3e060608b-config-data\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.714377 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8851e459-770d-4a08-8b35-41e3e060608b-combined-ca-bundle\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.714403 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8851e459-770d-4a08-8b35-41e3e060608b-logs\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.714434 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-combined-ca-bundle\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.714588 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8851e459-770d-4a08-8b35-41e3e060608b-horizon-secret-key\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.714648 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdmt\" (UniqueName: \"kubernetes.io/projected/8851e459-770d-4a08-8b35-41e3e060608b-kube-api-access-wzdmt\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.714693 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-horizon-tls-certs\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.714853 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khmhf\" (UniqueName: 
\"kubernetes.io/projected/dc1269fb-938b-4634-a683-9b0375e01915-kube-api-access-khmhf\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.714879 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8851e459-770d-4a08-8b35-41e3e060608b-scripts\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.714903 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8851e459-770d-4a08-8b35-41e3e060608b-horizon-tls-certs\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.715037 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc1269fb-938b-4634-a683-9b0375e01915-config-data\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.715355 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc1269fb-938b-4634-a683-9b0375e01915-scripts\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.715476 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc1269fb-938b-4634-a683-9b0375e01915-logs\") pod \"horizon-69b96dd4dd-2xcvn\" 
(UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.716423 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc1269fb-938b-4634-a683-9b0375e01915-config-data\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.720284 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-horizon-tls-certs\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.733252 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-combined-ca-bundle\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.733863 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-horizon-secret-key\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.755856 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khmhf\" (UniqueName: \"kubernetes.io/projected/dc1269fb-938b-4634-a683-9b0375e01915-kube-api-access-khmhf\") pod \"horizon-69b96dd4dd-2xcvn\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc 
kubenswrapper[4930]: I1124 12:16:31.802062 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.808917 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.808978 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.816857 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8851e459-770d-4a08-8b35-41e3e060608b-horizon-secret-key\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.816913 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzdmt\" (UniqueName: \"kubernetes.io/projected/8851e459-770d-4a08-8b35-41e3e060608b-kube-api-access-wzdmt\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.816996 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8851e459-770d-4a08-8b35-41e3e060608b-scripts\") pod \"horizon-7b7594b454-4gfnw\" (UID: 
\"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.817027 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8851e459-770d-4a08-8b35-41e3e060608b-horizon-tls-certs\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.817136 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8851e459-770d-4a08-8b35-41e3e060608b-config-data\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.817168 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8851e459-770d-4a08-8b35-41e3e060608b-combined-ca-bundle\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.817196 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8851e459-770d-4a08-8b35-41e3e060608b-logs\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.818043 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8851e459-770d-4a08-8b35-41e3e060608b-logs\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 
12:16:31.818962 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8851e459-770d-4a08-8b35-41e3e060608b-scripts\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.819501 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8851e459-770d-4a08-8b35-41e3e060608b-config-data\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.820747 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8851e459-770d-4a08-8b35-41e3e060608b-combined-ca-bundle\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.821189 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8851e459-770d-4a08-8b35-41e3e060608b-horizon-tls-certs\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.821550 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8851e459-770d-4a08-8b35-41e3e060608b-horizon-secret-key\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.839091 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzdmt\" (UniqueName: 
\"kubernetes.io/projected/8851e459-770d-4a08-8b35-41e3e060608b-kube-api-access-wzdmt\") pod \"horizon-7b7594b454-4gfnw\" (UID: \"8851e459-770d-4a08-8b35-41e3e060608b\") " pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:31 crc kubenswrapper[4930]: I1124 12:16:31.917928 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:16:33 crc kubenswrapper[4930]: I1124 12:16:33.539771 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" Nov 24 12:16:33 crc kubenswrapper[4930]: I1124 12:16:33.611879 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-ht7fg"] Nov 24 12:16:33 crc kubenswrapper[4930]: I1124 12:16:33.612139 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" podUID="0d8d8acd-7227-4a01-aa30-ece579854880" containerName="dnsmasq-dns" containerID="cri-o://ac5e2b9c86ba50bdac4214c47e50148adab62996714929148d8dc612be306cc3" gracePeriod=10 Nov 24 12:16:34 crc kubenswrapper[4930]: I1124 12:16:34.142286 4930 generic.go:334] "Generic (PLEG): container finished" podID="0d8d8acd-7227-4a01-aa30-ece579854880" containerID="ac5e2b9c86ba50bdac4214c47e50148adab62996714929148d8dc612be306cc3" exitCode=0 Nov 24 12:16:34 crc kubenswrapper[4930]: I1124 12:16:34.142463 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" event={"ID":"0d8d8acd-7227-4a01-aa30-ece579854880","Type":"ContainerDied","Data":"ac5e2b9c86ba50bdac4214c47e50148adab62996714929148d8dc612be306cc3"} Nov 24 12:16:35 crc kubenswrapper[4930]: I1124 12:16:35.101322 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" podUID="0d8d8acd-7227-4a01-aa30-ece579854880" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Nov 24 
12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.101060 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" podUID="0d8d8acd-7227-4a01-aa30-ece579854880" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Nov 24 12:16:40 crc kubenswrapper[4930]: E1124 12:16:40.733658 4930 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099" Nov 24 12:16:40 crc kubenswrapper[4930]: E1124 12:16:40.734147 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-tr
ust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdfx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-pzqxp_openstack(44fb1f8c-0796-4310-b053-8222837cfbf2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:16:40 crc kubenswrapper[4930]: E1124 12:16:40.738007 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-pzqxp" podUID="44fb1f8c-0796-4310-b053-8222837cfbf2" Nov 24 12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.838057 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.913462 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-scripts\") pod \"b5d3d54b-300a-4420-8114-f988d5b3d951\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " Nov 24 12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.913573 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-fernet-keys\") pod \"b5d3d54b-300a-4420-8114-f988d5b3d951\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " Nov 24 12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.913640 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg5d6\" (UniqueName: \"kubernetes.io/projected/b5d3d54b-300a-4420-8114-f988d5b3d951-kube-api-access-zg5d6\") pod \"b5d3d54b-300a-4420-8114-f988d5b3d951\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " Nov 24 12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.913664 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-config-data\") pod \"b5d3d54b-300a-4420-8114-f988d5b3d951\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " Nov 24 12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.913720 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-combined-ca-bundle\") pod \"b5d3d54b-300a-4420-8114-f988d5b3d951\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " Nov 24 12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.913762 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-credential-keys\") pod \"b5d3d54b-300a-4420-8114-f988d5b3d951\" (UID: \"b5d3d54b-300a-4420-8114-f988d5b3d951\") " Nov 24 12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.920167 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5d3d54b-300a-4420-8114-f988d5b3d951-kube-api-access-zg5d6" (OuterVolumeSpecName: "kube-api-access-zg5d6") pod "b5d3d54b-300a-4420-8114-f988d5b3d951" (UID: "b5d3d54b-300a-4420-8114-f988d5b3d951"). InnerVolumeSpecName "kube-api-access-zg5d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.920517 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b5d3d54b-300a-4420-8114-f988d5b3d951" (UID: "b5d3d54b-300a-4420-8114-f988d5b3d951"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.924250 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b5d3d54b-300a-4420-8114-f988d5b3d951" (UID: "b5d3d54b-300a-4420-8114-f988d5b3d951"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.925698 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-scripts" (OuterVolumeSpecName: "scripts") pod "b5d3d54b-300a-4420-8114-f988d5b3d951" (UID: "b5d3d54b-300a-4420-8114-f988d5b3d951"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.950434 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5d3d54b-300a-4420-8114-f988d5b3d951" (UID: "b5d3d54b-300a-4420-8114-f988d5b3d951"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:40 crc kubenswrapper[4930]: I1124 12:16:40.975219 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-config-data" (OuterVolumeSpecName: "config-data") pod "b5d3d54b-300a-4420-8114-f988d5b3d951" (UID: "b5d3d54b-300a-4420-8114-f988d5b3d951"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:41 crc kubenswrapper[4930]: I1124 12:16:41.015910 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:41 crc kubenswrapper[4930]: I1124 12:16:41.015942 4930 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:41 crc kubenswrapper[4930]: I1124 12:16:41.015951 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg5d6\" (UniqueName: \"kubernetes.io/projected/b5d3d54b-300a-4420-8114-f988d5b3d951-kube-api-access-zg5d6\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:41 crc kubenswrapper[4930]: I1124 12:16:41.015962 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:41 crc 
kubenswrapper[4930]: I1124 12:16:41.015981 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:41 crc kubenswrapper[4930]: I1124 12:16:41.015989 4930 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5d3d54b-300a-4420-8114-f988d5b3d951-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:41 crc kubenswrapper[4930]: I1124 12:16:41.206635 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4ldfj" Nov 24 12:16:41 crc kubenswrapper[4930]: I1124 12:16:41.207134 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4ldfj" event={"ID":"b5d3d54b-300a-4420-8114-f988d5b3d951","Type":"ContainerDied","Data":"2eb51df36b4c5dd0d08ee55eb055a38c3b3910a0a11db2536cc8aee0923377c3"} Nov 24 12:16:41 crc kubenswrapper[4930]: I1124 12:16:41.207165 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eb51df36b4c5dd0d08ee55eb055a38c3b3910a0a11db2536cc8aee0923377c3" Nov 24 12:16:41 crc kubenswrapper[4930]: E1124 12:16:41.208905 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099\\\"\"" pod="openstack/placement-db-sync-pzqxp" podUID="44fb1f8c-0796-4310-b053-8222837cfbf2" Nov 24 12:16:41 crc kubenswrapper[4930]: I1124 12:16:41.924452 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4ldfj"] Nov 24 12:16:41 crc kubenswrapper[4930]: I1124 12:16:41.931395 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-bootstrap-4ldfj"] Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.021515 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5nhh8"] Nov 24 12:16:42 crc kubenswrapper[4930]: E1124 12:16:42.021992 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d3d54b-300a-4420-8114-f988d5b3d951" containerName="keystone-bootstrap" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.022010 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d3d54b-300a-4420-8114-f988d5b3d951" containerName="keystone-bootstrap" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.022236 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5d3d54b-300a-4420-8114-f988d5b3d951" containerName="keystone-bootstrap" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.022941 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.025862 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.025963 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bt94b" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.026016 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.026101 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.029014 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.031693 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5nhh8"] Nov 24 12:16:42 crc 
kubenswrapper[4930]: I1124 12:16:42.095337 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5d3d54b-300a-4420-8114-f988d5b3d951" path="/var/lib/kubelet/pods/b5d3d54b-300a-4420-8114-f988d5b3d951/volumes" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.138800 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-scripts\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.138886 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qf7x\" (UniqueName: \"kubernetes.io/projected/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-kube-api-access-2qf7x\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.138913 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-fernet-keys\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.138933 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-credential-keys\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.138972 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-config-data\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.139107 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-combined-ca-bundle\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.241094 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-combined-ca-bundle\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.241150 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-scripts\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.241202 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qf7x\" (UniqueName: \"kubernetes.io/projected/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-kube-api-access-2qf7x\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.241225 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-fernet-keys\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.241242 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-credential-keys\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.241268 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-config-data\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.245758 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-credential-keys\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.246259 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-config-data\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.246465 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-combined-ca-bundle\") pod \"keystone-bootstrap-5nhh8\" (UID: 
\"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.246822 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-scripts\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.246938 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-fernet-keys\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.257635 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qf7x\" (UniqueName: \"kubernetes.io/projected/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-kube-api-access-2qf7x\") pod \"keystone-bootstrap-5nhh8\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") " pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:42 crc kubenswrapper[4930]: I1124 12:16:42.351202 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5nhh8" Nov 24 12:16:45 crc kubenswrapper[4930]: I1124 12:16:45.238592 4930 generic.go:334] "Generic (PLEG): container finished" podID="3933d228-dcc6-4ce9-97ff-17a5a2d49d0f" containerID="238acbf81621feccb913f177fdbc5cc93e7434423a7af655da8b2ef55e2d92f9" exitCode=0 Nov 24 12:16:45 crc kubenswrapper[4930]: I1124 12:16:45.238686 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zfb9w" event={"ID":"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f","Type":"ContainerDied","Data":"238acbf81621feccb913f177fdbc5cc93e7434423a7af655da8b2ef55e2d92f9"} Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.100876 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" podUID="0d8d8acd-7227-4a01-aa30-ece579854880" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.102432 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.103225 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.110179 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.122580 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.136409 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zfb9w" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.191810 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-config\") pod \"0d8d8acd-7227-4a01-aa30-ece579854880\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.191904 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-combined-ca-bundle\") pod \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.191935 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxc48\" (UniqueName: \"kubernetes.io/projected/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-kube-api-access-qxc48\") pod \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.191968 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-internal-tls-certs\") pod \"78bef63c-9e71-4322-94ca-83b6815c2ecd\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.191987 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2tgj\" (UniqueName: \"kubernetes.io/projected/78bef63c-9e71-4322-94ca-83b6815c2ecd-kube-api-access-f2tgj\") pod \"78bef63c-9e71-4322-94ca-83b6815c2ecd\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192033 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-ovsdbserver-nb\") pod \"0d8d8acd-7227-4a01-aa30-ece579854880\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192049 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-config-data\") pod \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192081 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-scripts\") pod \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192141 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-scripts\") pod \"78bef63c-9e71-4322-94ca-83b6815c2ecd\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192166 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-dns-svc\") pod \"0d8d8acd-7227-4a01-aa30-ece579854880\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192192 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-public-tls-certs\") pod \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192214 4930 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-dns-swift-storage-0\") pod \"0d8d8acd-7227-4a01-aa30-ece579854880\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192249 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-config-data\") pod \"78bef63c-9e71-4322-94ca-83b6815c2ecd\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192265 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-ovsdbserver-sb\") pod \"0d8d8acd-7227-4a01-aa30-ece579854880\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192286 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192315 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2txmg\" (UniqueName: \"kubernetes.io/projected/0d8d8acd-7227-4a01-aa30-ece579854880-kube-api-access-2txmg\") pod \"0d8d8acd-7227-4a01-aa30-ece579854880\" (UID: \"0d8d8acd-7227-4a01-aa30-ece579854880\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192340 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-config\") pod \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\" (UID: 
\"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192360 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78bef63c-9e71-4322-94ca-83b6815c2ecd-httpd-run\") pod \"78bef63c-9e71-4322-94ca-83b6815c2ecd\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192378 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-logs\") pod \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192402 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lshzn\" (UniqueName: \"kubernetes.io/projected/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-kube-api-access-lshzn\") pod \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\" (UID: \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192431 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-httpd-run\") pod \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\" (UID: \"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192448 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78bef63c-9e71-4322-94ca-83b6815c2ecd-logs\") pod \"78bef63c-9e71-4322-94ca-83b6815c2ecd\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192496 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"78bef63c-9e71-4322-94ca-83b6815c2ecd\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192517 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-combined-ca-bundle\") pod \"78bef63c-9e71-4322-94ca-83b6815c2ecd\" (UID: \"78bef63c-9e71-4322-94ca-83b6815c2ecd\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.192539 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-combined-ca-bundle\") pod \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\" (UID: \"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f\") " Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.204309 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" (UID: "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.205175 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-logs" (OuterVolumeSpecName: "logs") pod "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" (UID: "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.207801 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78bef63c-9e71-4322-94ca-83b6815c2ecd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "78bef63c-9e71-4322-94ca-83b6815c2ecd" (UID: "78bef63c-9e71-4322-94ca-83b6815c2ecd"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.217289 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78bef63c-9e71-4322-94ca-83b6815c2ecd-logs" (OuterVolumeSpecName: "logs") pod "78bef63c-9e71-4322-94ca-83b6815c2ecd" (UID: "78bef63c-9e71-4322-94ca-83b6815c2ecd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.228169 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "78bef63c-9e71-4322-94ca-83b6815c2ecd" (UID: "78bef63c-9e71-4322-94ca-83b6815c2ecd"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.232886 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-kube-api-access-lshzn" (OuterVolumeSpecName: "kube-api-access-lshzn") pod "3933d228-dcc6-4ce9-97ff-17a5a2d49d0f" (UID: "3933d228-dcc6-4ce9-97ff-17a5a2d49d0f"). InnerVolumeSpecName "kube-api-access-lshzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.233006 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d8d8acd-7227-4a01-aa30-ece579854880-kube-api-access-2txmg" (OuterVolumeSpecName: "kube-api-access-2txmg") pod "0d8d8acd-7227-4a01-aa30-ece579854880" (UID: "0d8d8acd-7227-4a01-aa30-ece579854880"). InnerVolumeSpecName "kube-api-access-2txmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.239391 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-scripts" (OuterVolumeSpecName: "scripts") pod "78bef63c-9e71-4322-94ca-83b6815c2ecd" (UID: "78bef63c-9e71-4322-94ca-83b6815c2ecd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.239670 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" (UID: "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.247960 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-kube-api-access-qxc48" (OuterVolumeSpecName: "kube-api-access-qxc48") pod "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" (UID: "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884"). InnerVolumeSpecName "kube-api-access-qxc48". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.249882 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78bef63c-9e71-4322-94ca-83b6815c2ecd-kube-api-access-f2tgj" (OuterVolumeSpecName: "kube-api-access-f2tgj") pod "78bef63c-9e71-4322-94ca-83b6815c2ecd" (UID: "78bef63c-9e71-4322-94ca-83b6815c2ecd"). InnerVolumeSpecName "kube-api-access-f2tgj". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.257783 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-scripts" (OuterVolumeSpecName: "scripts") pod "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" (UID: "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.283158 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" event={"ID":"0d8d8acd-7227-4a01-aa30-ece579854880","Type":"ContainerDied","Data":"8a3eba635ec981a56abefd550543690aa88204b5a46fcafedd04521236e223ff"}
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.283233 4930 scope.go:117] "RemoveContainer" containerID="ac5e2b9c86ba50bdac4214c47e50148adab62996714929148d8dc612be306cc3"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.283184 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.285774 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zfb9w" event={"ID":"3933d228-dcc6-4ce9-97ff-17a5a2d49d0f","Type":"ContainerDied","Data":"b007efcfdd76a4d37c2d71ef23b765c5eda88f96cff9adb874ea675f244c3c32"}
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.285815 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b007efcfdd76a4d37c2d71ef23b765c5eda88f96cff9adb874ea675f244c3c32"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.285873 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zfb9w"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.292763 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78bef63c-9e71-4322-94ca-83b6815c2ecd","Type":"ContainerDied","Data":"8b079d52373bcd99052bbcaca383851bfd8433ba9dcbf7f793478267787fd523"}
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.292877 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.297714 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.297763 4930 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.297780 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2txmg\" (UniqueName: \"kubernetes.io/projected/0d8d8acd-7227-4a01-aa30-ece579854880-kube-api-access-2txmg\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.297792 4930 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78bef63c-9e71-4322-94ca-83b6815c2ecd-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.297806 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-logs\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.297818 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lshzn\" (UniqueName: \"kubernetes.io/projected/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-kube-api-access-lshzn\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.297829 4930 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.297839 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78bef63c-9e71-4322-94ca-83b6815c2ecd-logs\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.297864 4930 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.297878 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxc48\" (UniqueName: \"kubernetes.io/projected/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-kube-api-access-qxc48\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.297973 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2tgj\" (UniqueName: \"kubernetes.io/projected/78bef63c-9e71-4322-94ca-83b6815c2ecd-kube-api-access-f2tgj\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.297985 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.300141 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884","Type":"ContainerDied","Data":"574a45477bc327a12f85f914bb9f9afc4569ee4f827dbe42e6da49632e5c0ea5"}
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.300372 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.309906 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78bef63c-9e71-4322-94ca-83b6815c2ecd" (UID: "78bef63c-9e71-4322-94ca-83b6815c2ecd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.324758 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0d8d8acd-7227-4a01-aa30-ece579854880" (UID: "0d8d8acd-7227-4a01-aa30-ece579854880"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.326560 4930 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.328268 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-config" (OuterVolumeSpecName: "config") pod "3933d228-dcc6-4ce9-97ff-17a5a2d49d0f" (UID: "3933d228-dcc6-4ce9-97ff-17a5a2d49d0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.329416 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" (UID: "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.333466 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0d8d8acd-7227-4a01-aa30-ece579854880" (UID: "0d8d8acd-7227-4a01-aa30-ece579854880"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.352235 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3933d228-dcc6-4ce9-97ff-17a5a2d49d0f" (UID: "3933d228-dcc6-4ce9-97ff-17a5a2d49d0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.358294 4930 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.368747 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0d8d8acd-7227-4a01-aa30-ece579854880" (UID: "0d8d8acd-7227-4a01-aa30-ece579854880"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.370398 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d8d8acd-7227-4a01-aa30-ece579854880" (UID: "0d8d8acd-7227-4a01-aa30-ece579854880"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.371625 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-config" (OuterVolumeSpecName: "config") pod "0d8d8acd-7227-4a01-aa30-ece579854880" (UID: "0d8d8acd-7227-4a01-aa30-ece579854880"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.383707 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-config-data" (OuterVolumeSpecName: "config-data") pod "78bef63c-9e71-4322-94ca-83b6815c2ecd" (UID: "78bef63c-9e71-4322-94ca-83b6815c2ecd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.384148 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-config-data" (OuterVolumeSpecName: "config-data") pod "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" (UID: "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.399625 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-config\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.400034 4930 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.400054 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.400068 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.400081 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-config\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.400093 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.400104 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.400114 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.400136 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.400146 4930 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.400161 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.400220 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d8d8acd-7227-4a01-aa30-ece579854880-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.400231 4930 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.403368 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" (UID: "6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.414255 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "78bef63c-9e71-4322-94ca-83b6815c2ecd" (UID: "78bef63c-9e71-4322-94ca-83b6815c2ecd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.502298 4930 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78bef63c-9e71-4322-94ca-83b6815c2ecd-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.502326 4930 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.627938 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-ht7fg"]
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.647954 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-ht7fg"]
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.663619 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.678803 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.713599 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.728204 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.736685 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 12:16:50 crc kubenswrapper[4930]: E1124 12:16:50.737237 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3933d228-dcc6-4ce9-97ff-17a5a2d49d0f" containerName="neutron-db-sync"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.737254 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="3933d228-dcc6-4ce9-97ff-17a5a2d49d0f" containerName="neutron-db-sync"
Nov 24 12:16:50 crc kubenswrapper[4930]: E1124 12:16:50.737271 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" containerName="glance-log"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.737279 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" containerName="glance-log"
Nov 24 12:16:50 crc kubenswrapper[4930]: E1124 12:16:50.737296 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" containerName="glance-httpd"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.737305 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" containerName="glance-httpd"
Nov 24 12:16:50 crc kubenswrapper[4930]: E1124 12:16:50.737315 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78bef63c-9e71-4322-94ca-83b6815c2ecd" containerName="glance-log"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.737322 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bef63c-9e71-4322-94ca-83b6815c2ecd" containerName="glance-log"
Nov 24 12:16:50 crc kubenswrapper[4930]: E1124 12:16:50.737338 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8d8acd-7227-4a01-aa30-ece579854880" containerName="dnsmasq-dns"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.737346 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8d8acd-7227-4a01-aa30-ece579854880" containerName="dnsmasq-dns"
Nov 24 12:16:50 crc kubenswrapper[4930]: E1124 12:16:50.737363 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8d8acd-7227-4a01-aa30-ece579854880" containerName="init"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.737371 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8d8acd-7227-4a01-aa30-ece579854880" containerName="init"
Nov 24 12:16:50 crc kubenswrapper[4930]: E1124 12:16:50.737385 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78bef63c-9e71-4322-94ca-83b6815c2ecd" containerName="glance-httpd"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.737393 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bef63c-9e71-4322-94ca-83b6815c2ecd" containerName="glance-httpd"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.737861 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="78bef63c-9e71-4322-94ca-83b6815c2ecd" containerName="glance-log"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.737881 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="3933d228-dcc6-4ce9-97ff-17a5a2d49d0f" containerName="neutron-db-sync"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.737895 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" containerName="glance-log"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.737914 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="78bef63c-9e71-4322-94ca-83b6815c2ecd" containerName="glance-httpd"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.737926 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d8d8acd-7227-4a01-aa30-ece579854880" containerName="dnsmasq-dns"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.737941 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" containerName="glance-httpd"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.752873 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.753004 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.759808 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.760177 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.760320 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.761018 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-b68t2"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.761497 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.789057 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.789226 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.791337 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.791892 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.807847 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.807911 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0248953-855e-4f5c-9811-b893580d90cd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.807950 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0248953-855e-4f5c-9811-b893580d90cd-logs\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.808107 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.808198 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-config-data\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.808286 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l47sj\" (UniqueName: \"kubernetes.io/projected/c74d03fb-686f-44a0-9132-02dd2c5d3d46-kube-api-access-l47sj\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.808318 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt2v5\" (UniqueName: \"kubernetes.io/projected/f0248953-855e-4f5c-9811-b893580d90cd-kube-api-access-xt2v5\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.808373 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c74d03fb-686f-44a0-9132-02dd2c5d3d46-logs\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.808400 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c74d03fb-686f-44a0-9132-02dd2c5d3d46-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.808476 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.808524 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.808589 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.808616 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.808702 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.808815 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-scripts\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.808858 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.913324 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.913436 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.913497 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.913530 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.913658 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.913908 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.915443 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.915843 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-scripts\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.915891 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.916206 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.916262 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0248953-855e-4f5c-9811-b893580d90cd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.916328 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0248953-855e-4f5c-9811-b893580d90cd-logs\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.916379 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.916424 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-config-data\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.916500 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l47sj\" (UniqueName: \"kubernetes.io/projected/c74d03fb-686f-44a0-9132-02dd2c5d3d46-kube-api-access-l47sj\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.916518 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt2v5\" (UniqueName: \"kubernetes.io/projected/f0248953-855e-4f5c-9811-b893580d90cd-kube-api-access-xt2v5\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.916591 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c74d03fb-686f-44a0-9132-02dd2c5d3d46-logs\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.916616 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c74d03fb-686f-44a0-9132-02dd2c5d3d46-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.917128 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c74d03fb-686f-44a0-9132-02dd2c5d3d46-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.919108 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0248953-855e-4f5c-9811-b893580d90cd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.919508 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0248953-855e-4f5c-9811-b893580d90cd-logs\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.920777 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c74d03fb-686f-44a0-9132-02dd2c5d3d46-logs\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.921373 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.923030 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.923506 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.923518 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.926126 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-scripts\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.929788 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0"
Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.936171 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.939963 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.942143 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l47sj\" (UniqueName: \"kubernetes.io/projected/c74d03fb-686f-44a0-9132-02dd2c5d3d46-kube-api-access-l47sj\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.944496 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt2v5\" (UniqueName: \"kubernetes.io/projected/f0248953-855e-4f5c-9811-b893580d90cd-kube-api-access-xt2v5\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.958991 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:16:50 crc kubenswrapper[4930]: I1124 12:16:50.959270 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") " pod="openstack/glance-default-external-api-0" Nov 24 
12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.090210 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.119967 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.470161 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c654c9745-mr8zv"] Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.472235 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.503874 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c654c9745-mr8zv"] Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.533498 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-dns-svc\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.533822 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-dns-swift-storage-0\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.533963 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-ovsdbserver-nb\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" 
(UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.534078 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-config\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.534181 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p4f5\" (UniqueName: \"kubernetes.io/projected/6d5e6363-1256-4dc1-b84b-a40298dd9d39-kube-api-access-9p4f5\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.534315 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-ovsdbserver-sb\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.612039 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-575d598bfb-msnzv"] Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.614130 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.619085 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.619335 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.619583 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-8xbcv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.619801 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.630623 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-575d598bfb-msnzv"] Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.638348 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-dns-svc\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.638585 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-dns-swift-storage-0\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.638750 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-ovsdbserver-nb\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: 
\"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.638856 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-config\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.638959 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-config\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.639077 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-httpd-config\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.639163 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p4f5\" (UniqueName: \"kubernetes.io/projected/6d5e6363-1256-4dc1-b84b-a40298dd9d39-kube-api-access-9p4f5\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.639314 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-ovsdbserver-sb\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " 
pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.639584 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-dns-svc\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.639646 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-dns-swift-storage-0\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.640149 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-config\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.640381 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-ovsdbserver-nb\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.640678 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-ovsdbserver-sb\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.659816 4930 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p4f5\" (UniqueName: \"kubernetes.io/projected/6d5e6363-1256-4dc1-b84b-a40298dd9d39-kube-api-access-9p4f5\") pod \"dnsmasq-dns-6c654c9745-mr8zv\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.741031 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-ovndb-tls-certs\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.741990 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx9rm\" (UniqueName: \"kubernetes.io/projected/54f78232-8dea-46dc-9fcd-b34fa6a4d400-kube-api-access-nx9rm\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.742456 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-config\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.743599 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-httpd-config\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.743816 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-combined-ca-bundle\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.747128 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-config\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.747299 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-httpd-config\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.799592 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.845696 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-combined-ca-bundle\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.846009 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-ovndb-tls-certs\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.846117 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx9rm\" (UniqueName: \"kubernetes.io/projected/54f78232-8dea-46dc-9fcd-b34fa6a4d400-kube-api-access-nx9rm\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.849676 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-combined-ca-bundle\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.849718 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-ovndb-tls-certs\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 
crc kubenswrapper[4930]: I1124 12:16:51.866868 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx9rm\" (UniqueName: \"kubernetes.io/projected/54f78232-8dea-46dc-9fcd-b34fa6a4d400-kube-api-access-nx9rm\") pod \"neutron-575d598bfb-msnzv\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:51 crc kubenswrapper[4930]: I1124 12:16:51.941201 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:52 crc kubenswrapper[4930]: I1124 12:16:52.231096 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d8d8acd-7227-4a01-aa30-ece579854880" path="/var/lib/kubelet/pods/0d8d8acd-7227-4a01-aa30-ece579854880/volumes" Nov 24 12:16:52 crc kubenswrapper[4930]: I1124 12:16:52.232086 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884" path="/var/lib/kubelet/pods/6d8b71e2-5e0b-4c52-b6b9-4f4caa4da884/volumes" Nov 24 12:16:52 crc kubenswrapper[4930]: I1124 12:16:52.232791 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78bef63c-9e71-4322-94ca-83b6815c2ecd" path="/var/lib/kubelet/pods/78bef63c-9e71-4322-94ca-83b6815c2ecd/volumes" Nov 24 12:16:52 crc kubenswrapper[4930]: E1124 12:16:52.366200 4930 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879" Nov 24 12:16:52 crc kubenswrapper[4930]: E1124 12:16:52.366363 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g62nv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-rcfd6_openstack(9169db1f-c94f-45a3-bc97-6ad40d17b7d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:16:52 crc kubenswrapper[4930]: E1124 12:16:52.367491 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-rcfd6" podUID="9169db1f-c94f-45a3-bc97-6ad40d17b7d1" Nov 24 12:16:52 crc kubenswrapper[4930]: I1124 12:16:52.389523 4930 scope.go:117] "RemoveContainer" containerID="a9fbdcc557f9d97dcd33d58f732b5714cebea9c2c676724340b71c9d180d5bec" Nov 24 12:16:52 crc kubenswrapper[4930]: I1124 12:16:52.570409 4930 scope.go:117] "RemoveContainer" containerID="6f7648c425d4f0cd816a773566da1f9519c4f58668e1e00c92d06218f99f14d8" Nov 24 12:16:52 crc kubenswrapper[4930]: I1124 12:16:52.679451 4930 scope.go:117] "RemoveContainer" containerID="e5e39d826bd8bc27461b4728d5bd01ee4f94182a323943518f25330da3ebfe11" Nov 24 12:16:52 crc kubenswrapper[4930]: I1124 12:16:52.782077 4930 scope.go:117] "RemoveContainer" containerID="b6ad1815a5ae0bf311b7cded1a10169fdfcd1539e9c3b8e1da7e387ad44265b2" Nov 24 12:16:52 crc kubenswrapper[4930]: I1124 12:16:52.848812 4930 scope.go:117] "RemoveContainer" containerID="87f370abca673d2f0663176501108ee245f8be019fe0986f466cea51e425e4ef" Nov 24 12:16:52 crc kubenswrapper[4930]: I1124 12:16:52.883135 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69b96dd4dd-2xcvn"] Nov 24 12:16:53 crc kubenswrapper[4930]: I1124 12:16:53.109187 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b7594b454-4gfnw"] Nov 24 12:16:53 crc kubenswrapper[4930]: 
I1124 12:16:53.123012 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5nhh8"] Nov 24 12:16:53 crc kubenswrapper[4930]: I1124 12:16:53.349255 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:16:53 crc kubenswrapper[4930]: W1124 12:16:53.352702 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc74d03fb_686f_44a0_9132_02dd2c5d3d46.slice/crio-c94a3cd6beb3a1539344aaedc849ece766e05111cdbb762e6ed228bce00d37c3 WatchSource:0}: Error finding container c94a3cd6beb3a1539344aaedc849ece766e05111cdbb762e6ed228bce00d37c3: Status 404 returned error can't find the container with id c94a3cd6beb3a1539344aaedc849ece766e05111cdbb762e6ed228bce00d37c3 Nov 24 12:16:53 crc kubenswrapper[4930]: I1124 12:16:53.380378 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56d56cd8f5-hxqgp" event={"ID":"cbd922a6-f938-478a-8db2-d99dc37f3a69","Type":"ContainerStarted","Data":"7a9872195563ac3837b27c862b5fc468d87289f1bf166b50477b2445fe494f1e"} Nov 24 12:16:53 crc kubenswrapper[4930]: I1124 12:16:53.382889 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7594b454-4gfnw" event={"ID":"8851e459-770d-4a08-8b35-41e3e060608b","Type":"ContainerStarted","Data":"a00ae20d38dfba5e09619b8b42a8bcefcbf99ac95daed135d45752be4beb9cc6"} Nov 24 12:16:53 crc kubenswrapper[4930]: I1124 12:16:53.390642 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69b96dd4dd-2xcvn" event={"ID":"dc1269fb-938b-4634-a683-9b0375e01915","Type":"ContainerStarted","Data":"a8f420061cd09591125744b746e129818673889aba214fb24b5bd517be9125c0"} Nov 24 12:16:53 crc kubenswrapper[4930]: I1124 12:16:53.401518 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c654c9745-mr8zv"] Nov 24 12:16:53 crc kubenswrapper[4930]: I1124 12:16:53.404882 4930 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-drnpc" event={"ID":"35d6db25-381f-4f83-a033-984addf8da0d","Type":"ContainerStarted","Data":"682b3369b40866db1c4d5dd05390f9c51695077c1147c17303a5724fff51c51c"} Nov 24 12:16:53 crc kubenswrapper[4930]: I1124 12:16:53.413318 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5nhh8" event={"ID":"8f75b82b-237c-4bcd-9bd4-8e72a43204aa","Type":"ContainerStarted","Data":"54091ffafde0087e39113c398cdaf54f30e80190b153df4d1bacd90576cbcdc4"} Nov 24 12:16:53 crc kubenswrapper[4930]: I1124 12:16:53.415037 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69679b8f55-8knvw" event={"ID":"9ecc66ca-44d6-4220-9f1c-2b054239f484","Type":"ContainerStarted","Data":"95f3b7d02d3bc3c5dda4eb0b5d5bca10ac73478e4b03a3f2b832230e85f4141b"} Nov 24 12:16:53 crc kubenswrapper[4930]: I1124 12:16:53.426615 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-drnpc" podStartSLOduration=4.484235474 podStartE2EDuration="31.426596694s" podCreationTimestamp="2025-11-24 12:16:22 +0000 UTC" firstStartedPulling="2025-11-24 12:16:25.418607906 +0000 UTC m=+1032.032935856" lastFinishedPulling="2025-11-24 12:16:52.360969126 +0000 UTC m=+1058.975297076" observedRunningTime="2025-11-24 12:16:53.420012104 +0000 UTC m=+1060.034340064" watchObservedRunningTime="2025-11-24 12:16:53.426596694 +0000 UTC m=+1060.040924644" Nov 24 12:16:53 crc kubenswrapper[4930]: I1124 12:16:53.439269 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6dcc7c9c-rcxf9" event={"ID":"dced9a55-e50d-4a84-8876-25e6981347f8","Type":"ContainerStarted","Data":"c724a9494e3a0adb1b5041991cef78607bc76e53098c5791432371f0b8a38e73"} Nov 24 12:16:53 crc kubenswrapper[4930]: I1124 12:16:53.445573 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9","Type":"ContainerStarted","Data":"1bb57ec1407cd77090369ffa5162d096f0f971c81613b78b630c8e1f554d5cfb"} Nov 24 12:16:53 crc kubenswrapper[4930]: E1124 12:16:53.447306 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879\\\"\"" pod="openstack/cinder-db-sync-rcfd6" podUID="9169db1f-c94f-45a3-bc97-6ad40d17b7d1" Nov 24 12:16:53 crc kubenswrapper[4930]: W1124 12:16:53.479625 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d5e6363_1256_4dc1_b84b_a40298dd9d39.slice/crio-80a5cde113a2d55b63db5996ab48a27272f4105ecde030b7cea331a481157f5c WatchSource:0}: Error finding container 80a5cde113a2d55b63db5996ab48a27272f4105ecde030b7cea331a481157f5c: Status 404 returned error can't find the container with id 80a5cde113a2d55b63db5996ab48a27272f4105ecde030b7cea331a481157f5c Nov 24 12:16:53 crc kubenswrapper[4930]: I1124 12:16:53.498606 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-575d598bfb-msnzv"] Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.183648 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.377819 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-785757c67f-sl8rq"] Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.380298 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.383228 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.383455 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.390092 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-785757c67f-sl8rq"] Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.410405 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-ovndb-tls-certs\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.410464 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-internal-tls-certs\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.410603 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-httpd-config\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.410629 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrt7x\" (UniqueName: 
\"kubernetes.io/projected/c3722de2-f333-4130-97bb-d2377fc9052f-kube-api-access-mrt7x\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.410677 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-config\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.410707 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-combined-ca-bundle\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.410762 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-public-tls-certs\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.480431 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575d598bfb-msnzv" event={"ID":"54f78232-8dea-46dc-9fcd-b34fa6a4d400","Type":"ContainerStarted","Data":"6a7f3c5a3e7221f7721a273f796e214c82181a6e5e5f0424692ccbf0f35c3692"} Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.482450 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7594b454-4gfnw" 
event={"ID":"8851e459-770d-4a08-8b35-41e3e060608b","Type":"ContainerStarted","Data":"234954ebe84cf544eeefa329e0362942ea511ed59f69b5e9da9329f659ab2b7f"} Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.484862 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69b96dd4dd-2xcvn" event={"ID":"dc1269fb-938b-4634-a683-9b0375e01915","Type":"ContainerStarted","Data":"8411c6e068774c00acdd68d94c0dff68f5ae5b30a88439215cbadea9e061ca90"} Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.489872 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5nhh8" event={"ID":"8f75b82b-237c-4bcd-9bd4-8e72a43204aa","Type":"ContainerStarted","Data":"18dbb96e3bc811431ec23dfee6de196b683ceec9debc3669650b9c188ec25d59"} Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.495805 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69679b8f55-8knvw" event={"ID":"9ecc66ca-44d6-4220-9f1c-2b054239f484","Type":"ContainerStarted","Data":"81839e0c0639b58d676fca72d2b02c94deeff5bd06adc2f682f8411e51fd2ca0"} Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.495967 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69679b8f55-8knvw" podUID="9ecc66ca-44d6-4220-9f1c-2b054239f484" containerName="horizon-log" containerID="cri-o://95f3b7d02d3bc3c5dda4eb0b5d5bca10ac73478e4b03a3f2b832230e85f4141b" gracePeriod=30 Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.496069 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69679b8f55-8knvw" podUID="9ecc66ca-44d6-4220-9f1c-2b054239f484" containerName="horizon" containerID="cri-o://81839e0c0639b58d676fca72d2b02c94deeff5bd06adc2f682f8411e51fd2ca0" gracePeriod=30 Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.506908 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"c74d03fb-686f-44a0-9132-02dd2c5d3d46","Type":"ContainerStarted","Data":"c94a3cd6beb3a1539344aaedc849ece766e05111cdbb762e6ed228bce00d37c3"} Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.512850 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-config\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.512914 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-combined-ca-bundle\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.513033 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-public-tls-certs\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.516510 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-ovndb-tls-certs\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.516580 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-internal-tls-certs\") pod \"neutron-785757c67f-sl8rq\" (UID: 
\"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.516809 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-httpd-config\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.516849 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrt7x\" (UniqueName: \"kubernetes.io/projected/c3722de2-f333-4130-97bb-d2377fc9052f-kube-api-access-mrt7x\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.519102 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5nhh8" podStartSLOduration=13.519085333 podStartE2EDuration="13.519085333s" podCreationTimestamp="2025-11-24 12:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:54.509885809 +0000 UTC m=+1061.124213769" watchObservedRunningTime="2025-11-24 12:16:54.519085333 +0000 UTC m=+1061.133413283" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.522921 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-combined-ca-bundle\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.530023 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-internal-tls-certs\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.534245 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-public-tls-certs\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.535428 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-config\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.548012 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-httpd-config\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.548318 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56d56cd8f5-hxqgp" event={"ID":"cbd922a6-f938-478a-8db2-d99dc37f3a69","Type":"ContainerStarted","Data":"83e219941bf1309705d62b99197b40b292c80ca3cd3ed7de43869f46826a3910"} Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.548576 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrt7x\" (UniqueName: \"kubernetes.io/projected/c3722de2-f333-4130-97bb-d2377fc9052f-kube-api-access-mrt7x\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" 
Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.548764 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56d56cd8f5-hxqgp" podUID="cbd922a6-f938-478a-8db2-d99dc37f3a69" containerName="horizon-log" containerID="cri-o://7a9872195563ac3837b27c862b5fc468d87289f1bf166b50477b2445fe494f1e" gracePeriod=30 Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.548924 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56d56cd8f5-hxqgp" podUID="cbd922a6-f938-478a-8db2-d99dc37f3a69" containerName="horizon" containerID="cri-o://83e219941bf1309705d62b99197b40b292c80ca3cd3ed7de43869f46826a3910" gracePeriod=30 Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.549636 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3722de2-f333-4130-97bb-d2377fc9052f-ovndb-tls-certs\") pod \"neutron-785757c67f-sl8rq\" (UID: \"c3722de2-f333-4130-97bb-d2377fc9052f\") " pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.553479 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69679b8f55-8knvw" podStartSLOduration=3.707189857 podStartE2EDuration="30.553465372s" podCreationTimestamp="2025-11-24 12:16:24 +0000 UTC" firstStartedPulling="2025-11-24 12:16:25.631779038 +0000 UTC m=+1032.246106988" lastFinishedPulling="2025-11-24 12:16:52.478054553 +0000 UTC m=+1059.092382503" observedRunningTime="2025-11-24 12:16:54.547774259 +0000 UTC m=+1061.162102219" watchObservedRunningTime="2025-11-24 12:16:54.553465372 +0000 UTC m=+1061.167793322" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.565866 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"f0248953-855e-4f5c-9811-b893580d90cd","Type":"ContainerStarted","Data":"0f4f9e12810db403e0379972e528968fbfa7a6e2720b6040227f6f7b80d613e3"} Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.580836 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" event={"ID":"6d5e6363-1256-4dc1-b84b-a40298dd9d39","Type":"ContainerStarted","Data":"80a5cde113a2d55b63db5996ab48a27272f4105ecde030b7cea331a481157f5c"} Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.582455 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-56d56cd8f5-hxqgp" podStartSLOduration=7.10331621 podStartE2EDuration="31.582433845s" podCreationTimestamp="2025-11-24 12:16:23 +0000 UTC" firstStartedPulling="2025-11-24 12:16:25.51607006 +0000 UTC m=+1032.130398010" lastFinishedPulling="2025-11-24 12:16:49.995187685 +0000 UTC m=+1056.609515645" observedRunningTime="2025-11-24 12:16:54.578445531 +0000 UTC m=+1061.192773481" watchObservedRunningTime="2025-11-24 12:16:54.582433845 +0000 UTC m=+1061.196761795" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.589876 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b6dcc7c9c-rcxf9" podUID="dced9a55-e50d-4a84-8876-25e6981347f8" containerName="horizon-log" containerID="cri-o://c724a9494e3a0adb1b5041991cef78607bc76e53098c5791432371f0b8a38e73" gracePeriod=30 Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.590182 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6dcc7c9c-rcxf9" event={"ID":"dced9a55-e50d-4a84-8876-25e6981347f8","Type":"ContainerStarted","Data":"bee4a24e4de09e86532ea06bb6bc0154e5cec307ba7511473076aded1752631f"} Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.590478 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b6dcc7c9c-rcxf9" podUID="dced9a55-e50d-4a84-8876-25e6981347f8" 
containerName="horizon" containerID="cri-o://bee4a24e4de09e86532ea06bb6bc0154e5cec307ba7511473076aded1752631f" gracePeriod=30 Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.649558 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b6dcc7c9c-rcxf9" podStartSLOduration=4.910966432 podStartE2EDuration="32.649521285s" podCreationTimestamp="2025-11-24 12:16:22 +0000 UTC" firstStartedPulling="2025-11-24 12:16:24.727568247 +0000 UTC m=+1031.341896197" lastFinishedPulling="2025-11-24 12:16:52.4661231 +0000 UTC m=+1059.080451050" observedRunningTime="2025-11-24 12:16:54.632037292 +0000 UTC m=+1061.246365242" watchObservedRunningTime="2025-11-24 12:16:54.649521285 +0000 UTC m=+1061.263849235" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.750477 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:54 crc kubenswrapper[4930]: I1124 12:16:54.982604 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:16:55 crc kubenswrapper[4930]: I1124 12:16:55.106745 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6856c564b9-ht7fg" podUID="0d8d8acd-7227-4a01-aa30-ece579854880" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Nov 24 12:16:55 crc kubenswrapper[4930]: I1124 12:16:55.467922 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-785757c67f-sl8rq"] Nov 24 12:16:55 crc kubenswrapper[4930]: I1124 12:16:55.600090 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f0248953-855e-4f5c-9811-b893580d90cd","Type":"ContainerStarted","Data":"5a8fa8bc9d5f4ca0d5e698e0812e92307719280a80a069cb7ab6620d8da8441d"} Nov 24 12:16:55 crc kubenswrapper[4930]: I1124 12:16:55.601085 4930 generic.go:334] "Generic (PLEG): container 
finished" podID="6d5e6363-1256-4dc1-b84b-a40298dd9d39" containerID="2e8d46714b60a6f25a6b7dd5f076fa9710476b72a47b6f357fefbcdbb841c623" exitCode=0 Nov 24 12:16:55 crc kubenswrapper[4930]: I1124 12:16:55.601123 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" event={"ID":"6d5e6363-1256-4dc1-b84b-a40298dd9d39","Type":"ContainerDied","Data":"2e8d46714b60a6f25a6b7dd5f076fa9710476b72a47b6f357fefbcdbb841c623"} Nov 24 12:16:55 crc kubenswrapper[4930]: I1124 12:16:55.611149 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c74d03fb-686f-44a0-9132-02dd2c5d3d46","Type":"ContainerStarted","Data":"7f8a3d7ed994aa71f942f412422c941ae6e52fb01d4d6018b5b34c5dce804729"} Nov 24 12:16:55 crc kubenswrapper[4930]: I1124 12:16:55.617409 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575d598bfb-msnzv" event={"ID":"54f78232-8dea-46dc-9fcd-b34fa6a4d400","Type":"ContainerStarted","Data":"f87588fcc1ea10945f088f6e47d7b8528bd4dd874d7d8c771e467e2377bee57a"} Nov 24 12:16:55 crc kubenswrapper[4930]: I1124 12:16:55.617456 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575d598bfb-msnzv" event={"ID":"54f78232-8dea-46dc-9fcd-b34fa6a4d400","Type":"ContainerStarted","Data":"84dbd81a4e3885f82029e654f3d6f7085284e80b89db27ed085afe62cdfea73f"} Nov 24 12:16:55 crc kubenswrapper[4930]: I1124 12:16:55.618008 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:16:55 crc kubenswrapper[4930]: I1124 12:16:55.620018 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7594b454-4gfnw" event={"ID":"8851e459-770d-4a08-8b35-41e3e060608b","Type":"ContainerStarted","Data":"2ff7423fbf8bb9be37b6ba742765131101ae5764efaa45f6118a07d40a87e7f3"} Nov 24 12:16:55 crc kubenswrapper[4930]: I1124 12:16:55.646560 4930 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/neutron-575d598bfb-msnzv" podStartSLOduration=4.646518439 podStartE2EDuration="4.646518439s" podCreationTimestamp="2025-11-24 12:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:55.646012365 +0000 UTC m=+1062.260340345" watchObservedRunningTime="2025-11-24 12:16:55.646518439 +0000 UTC m=+1062.260846389" Nov 24 12:16:55 crc kubenswrapper[4930]: I1124 12:16:55.660806 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69b96dd4dd-2xcvn" event={"ID":"dc1269fb-938b-4634-a683-9b0375e01915","Type":"ContainerStarted","Data":"1539e69199a01f99e44f84c909d2ffeda2fc04ba9152bdee9513672ff3c78a9f"} Nov 24 12:16:55 crc kubenswrapper[4930]: I1124 12:16:55.682719 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b7594b454-4gfnw" podStartSLOduration=24.68270224 podStartE2EDuration="24.68270224s" podCreationTimestamp="2025-11-24 12:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:55.673189566 +0000 UTC m=+1062.287517516" watchObservedRunningTime="2025-11-24 12:16:55.68270224 +0000 UTC m=+1062.297030190" Nov 24 12:16:56 crc kubenswrapper[4930]: I1124 12:16:56.679516 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" event={"ID":"6d5e6363-1256-4dc1-b84b-a40298dd9d39","Type":"ContainerStarted","Data":"aa63a2b748b51592eb2142780d3fb2b06e0da28395faa63243c01c8b137b35e6"} Nov 24 12:16:56 crc kubenswrapper[4930]: I1124 12:16:56.681330 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:16:56 crc kubenswrapper[4930]: I1124 12:16:56.683422 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-785757c67f-sl8rq" 
event={"ID":"c3722de2-f333-4130-97bb-d2377fc9052f","Type":"ContainerStarted","Data":"d89407e8c955ff9d3de9887cac0d8c1effdfd0f9fd89ba9a692189a7e0eb693d"} Nov 24 12:16:56 crc kubenswrapper[4930]: I1124 12:16:56.685240 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-785757c67f-sl8rq" event={"ID":"c3722de2-f333-4130-97bb-d2377fc9052f","Type":"ContainerStarted","Data":"95be43455b5783303fa0b49001afc036d87c12fa1e72ed757653b5788faa05b4"} Nov 24 12:16:56 crc kubenswrapper[4930]: I1124 12:16:56.689591 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pzqxp" event={"ID":"44fb1f8c-0796-4310-b053-8222837cfbf2","Type":"ContainerStarted","Data":"4a44ef7b109c6ccd359bbdb8ed3e9bf626ae274ff45ade5597326ee672520e40"} Nov 24 12:16:56 crc kubenswrapper[4930]: I1124 12:16:56.709318 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69b96dd4dd-2xcvn" podStartSLOduration=25.709298085 podStartE2EDuration="25.709298085s" podCreationTimestamp="2025-11-24 12:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:55.698602247 +0000 UTC m=+1062.312930187" watchObservedRunningTime="2025-11-24 12:16:56.709298085 +0000 UTC m=+1063.323626035" Nov 24 12:16:56 crc kubenswrapper[4930]: I1124 12:16:56.716459 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" podStartSLOduration=5.716438381 podStartE2EDuration="5.716438381s" podCreationTimestamp="2025-11-24 12:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:56.714064362 +0000 UTC m=+1063.328392312" watchObservedRunningTime="2025-11-24 12:16:56.716438381 +0000 UTC m=+1063.330766331" Nov 24 12:16:56 crc kubenswrapper[4930]: I1124 12:16:56.735987 4930 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9","Type":"ContainerStarted","Data":"14eeb39c18d0d6800e9af63273d2543b15cb4bb5750f9ee1005951b1bd290225"} Nov 24 12:16:56 crc kubenswrapper[4930]: I1124 12:16:56.750326 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-pzqxp" podStartSLOduration=4.207151236 podStartE2EDuration="34.750305425s" podCreationTimestamp="2025-11-24 12:16:22 +0000 UTC" firstStartedPulling="2025-11-24 12:16:25.492276145 +0000 UTC m=+1032.106604085" lastFinishedPulling="2025-11-24 12:16:56.035430314 +0000 UTC m=+1062.649758274" observedRunningTime="2025-11-24 12:16:56.72998362 +0000 UTC m=+1063.344311570" watchObservedRunningTime="2025-11-24 12:16:56.750305425 +0000 UTC m=+1063.364633365" Nov 24 12:16:56 crc kubenswrapper[4930]: I1124 12:16:56.759115 4930 generic.go:334] "Generic (PLEG): container finished" podID="35d6db25-381f-4f83-a033-984addf8da0d" containerID="682b3369b40866db1c4d5dd05390f9c51695077c1147c17303a5724fff51c51c" exitCode=0 Nov 24 12:16:56 crc kubenswrapper[4930]: I1124 12:16:56.759847 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-drnpc" event={"ID":"35d6db25-381f-4f83-a033-984addf8da0d","Type":"ContainerDied","Data":"682b3369b40866db1c4d5dd05390f9c51695077c1147c17303a5724fff51c51c"} Nov 24 12:16:57 crc kubenswrapper[4930]: I1124 12:16:57.803817 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c74d03fb-686f-44a0-9132-02dd2c5d3d46","Type":"ContainerStarted","Data":"f3b4a4d6bd8138c5b5da28bb0b3c0eea4f5f015a304ef49243235095ef3a0322"} Nov 24 12:16:57 crc kubenswrapper[4930]: I1124 12:16:57.809219 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-785757c67f-sl8rq" 
event={"ID":"c3722de2-f333-4130-97bb-d2377fc9052f","Type":"ContainerStarted","Data":"98c22f8f5d0a88981871f19d66c66bb6f93f273616b137ae468e6ffa5eff6103"} Nov 24 12:16:57 crc kubenswrapper[4930]: I1124 12:16:57.810443 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:16:57 crc kubenswrapper[4930]: I1124 12:16:57.823549 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f0248953-855e-4f5c-9811-b893580d90cd","Type":"ContainerStarted","Data":"c1aad06afd7954be4c177cdb6633ffc0943fad57338d34a101c37fcbe3c54083"} Nov 24 12:16:57 crc kubenswrapper[4930]: I1124 12:16:57.835808 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.835730332 podStartE2EDuration="7.835730332s" podCreationTimestamp="2025-11-24 12:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:57.832421287 +0000 UTC m=+1064.446749237" watchObservedRunningTime="2025-11-24 12:16:57.835730332 +0000 UTC m=+1064.450058282" Nov 24 12:16:57 crc kubenswrapper[4930]: I1124 12:16:57.885014 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.884994779 podStartE2EDuration="7.884994779s" podCreationTimestamp="2025-11-24 12:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:57.87355297 +0000 UTC m=+1064.487880910" watchObservedRunningTime="2025-11-24 12:16:57.884994779 +0000 UTC m=+1064.499322729" Nov 24 12:16:57 crc kubenswrapper[4930]: I1124 12:16:57.905011 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-785757c67f-sl8rq" podStartSLOduration=3.904989014 
podStartE2EDuration="3.904989014s" podCreationTimestamp="2025-11-24 12:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:16:57.901204355 +0000 UTC m=+1064.515532325" watchObservedRunningTime="2025-11-24 12:16:57.904989014 +0000 UTC m=+1064.519316974" Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.437464 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-drnpc" Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.555015 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d6db25-381f-4f83-a033-984addf8da0d-combined-ca-bundle\") pod \"35d6db25-381f-4f83-a033-984addf8da0d\" (UID: \"35d6db25-381f-4f83-a033-984addf8da0d\") " Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.555066 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35d6db25-381f-4f83-a033-984addf8da0d-db-sync-config-data\") pod \"35d6db25-381f-4f83-a033-984addf8da0d\" (UID: \"35d6db25-381f-4f83-a033-984addf8da0d\") " Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.555183 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4pb4\" (UniqueName: \"kubernetes.io/projected/35d6db25-381f-4f83-a033-984addf8da0d-kube-api-access-q4pb4\") pod \"35d6db25-381f-4f83-a033-984addf8da0d\" (UID: \"35d6db25-381f-4f83-a033-984addf8da0d\") " Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.582773 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d6db25-381f-4f83-a033-984addf8da0d-kube-api-access-q4pb4" (OuterVolumeSpecName: "kube-api-access-q4pb4") pod "35d6db25-381f-4f83-a033-984addf8da0d" (UID: "35d6db25-381f-4f83-a033-984addf8da0d"). 
InnerVolumeSpecName "kube-api-access-q4pb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.589112 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d6db25-381f-4f83-a033-984addf8da0d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "35d6db25-381f-4f83-a033-984addf8da0d" (UID: "35d6db25-381f-4f83-a033-984addf8da0d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.608845 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d6db25-381f-4f83-a033-984addf8da0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35d6db25-381f-4f83-a033-984addf8da0d" (UID: "35d6db25-381f-4f83-a033-984addf8da0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.658435 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d6db25-381f-4f83-a033-984addf8da0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.658475 4930 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35d6db25-381f-4f83-a033-984addf8da0d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.658491 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4pb4\" (UniqueName: \"kubernetes.io/projected/35d6db25-381f-4f83-a033-984addf8da0d-kube-api-access-q4pb4\") on node \"crc\" DevicePath \"\"" Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.836201 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-drnpc"
Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.836212 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-drnpc" event={"ID":"35d6db25-381f-4f83-a033-984addf8da0d","Type":"ContainerDied","Data":"ad22efb08110d1ec71e9431dfa64d735532133a6fb490533b95331ec6fcf3393"}
Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.836268 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad22efb08110d1ec71e9431dfa64d735532133a6fb490533b95331ec6fcf3393"
Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.838410 4930 generic.go:334] "Generic (PLEG): container finished" podID="8f75b82b-237c-4bcd-9bd4-8e72a43204aa" containerID="18dbb96e3bc811431ec23dfee6de196b683ceec9debc3669650b9c188ec25d59" exitCode=0
Nov 24 12:16:58 crc kubenswrapper[4930]: I1124 12:16:58.838584 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5nhh8" event={"ID":"8f75b82b-237c-4bcd-9bd4-8e72a43204aa","Type":"ContainerDied","Data":"18dbb96e3bc811431ec23dfee6de196b683ceec9debc3669650b9c188ec25d59"}
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.076367 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-b86dc847c-csn2f"]
Nov 24 12:16:59 crc kubenswrapper[4930]: E1124 12:16:59.076790 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d6db25-381f-4f83-a033-984addf8da0d" containerName="barbican-db-sync"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.076819 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d6db25-381f-4f83-a033-984addf8da0d" containerName="barbican-db-sync"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.076987 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="35d6db25-381f-4f83-a033-984addf8da0d" containerName="barbican-db-sync"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.077868 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.081228 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-98lm4"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.081471 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.081633 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.125661 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-b86dc847c-csn2f"]
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.157317 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"]
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.159278 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.163577 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.165471 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"]
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.184714 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a583517-6311-464a-b855-2a2d1e788461-logs\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.184807 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmcdw\" (UniqueName: \"kubernetes.io/projected/4a583517-6311-464a-b855-2a2d1e788461-kube-api-access-dmcdw\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.184856 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a583517-6311-464a-b855-2a2d1e788461-config-data-custom\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.184904 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a583517-6311-464a-b855-2a2d1e788461-config-data\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.184933 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a583517-6311-464a-b855-2a2d1e788461-combined-ca-bundle\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.227672 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c654c9745-mr8zv"]
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.250569 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-2m4rx"]
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.252607 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.279612 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-2m4rx"]
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.288583 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmcdw\" (UniqueName: \"kubernetes.io/projected/4a583517-6311-464a-b855-2a2d1e788461-kube-api-access-dmcdw\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.288903 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vbfp\" (UniqueName: \"kubernetes.io/projected/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-kube-api-access-6vbfp\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.289054 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a583517-6311-464a-b855-2a2d1e788461-config-data-custom\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.289199 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a583517-6311-464a-b855-2a2d1e788461-config-data\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.289267 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-config-data-custom\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.289348 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-config-data\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.289471 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a583517-6311-464a-b855-2a2d1e788461-combined-ca-bundle\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.289560 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-logs\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.289648 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-combined-ca-bundle\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.289860 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a583517-6311-464a-b855-2a2d1e788461-logs\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.290751 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a583517-6311-464a-b855-2a2d1e788461-logs\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.301277 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a583517-6311-464a-b855-2a2d1e788461-config-data-custom\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.302087 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a583517-6311-464a-b855-2a2d1e788461-combined-ca-bundle\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.312895 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a583517-6311-464a-b855-2a2d1e788461-config-data\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.365314 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmcdw\" (UniqueName: \"kubernetes.io/projected/4a583517-6311-464a-b855-2a2d1e788461-kube-api-access-dmcdw\") pod \"barbican-worker-b86dc847c-csn2f\" (UID: \"4a583517-6311-464a-b855-2a2d1e788461\") " pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.392388 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74dfb57686-drx5k"]
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.392595 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.393906 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79nrw\" (UniqueName: \"kubernetes.io/projected/9299ea16-3ac9-4356-916d-663e04e08206-kube-api-access-79nrw\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.394179 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-config\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.394325 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-config-data-custom\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.394411 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-config-data\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.394496 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-logs\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.394583 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-combined-ca-bundle\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.394756 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.394821 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.394881 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vbfp\" (UniqueName: \"kubernetes.io/projected/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-kube-api-access-6vbfp\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.394960 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-dns-svc\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.402359 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-logs\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.403071 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.410204 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-config-data-custom\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.410783 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.423416 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-combined-ca-bundle\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.424944 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-config-data\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.429453 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vbfp\" (UniqueName: \"kubernetes.io/projected/b24c3d9b-ee6d-47ef-9391-91a395edbfbd-kube-api-access-6vbfp\") pod \"barbican-keystone-listener-86bf5c4cf6-tbptj\" (UID: \"b24c3d9b-ee6d-47ef-9391-91a395edbfbd\") " pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.430429 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-b86dc847c-csn2f"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.464090 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74dfb57686-drx5k"]
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.498664 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.498722 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.498778 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-dns-svc\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.498813 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.498833 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79nrw\" (UniqueName: \"kubernetes.io/projected/9299ea16-3ac9-4356-916d-663e04e08206-kube-api-access-79nrw\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.498865 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-config-data-custom\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.498887 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc6e227e-e6f0-4244-80cf-67e85867d21f-logs\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.498911 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-config\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.498948 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-config-data\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.498967 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnggp\" (UniqueName: \"kubernetes.io/projected/dc6e227e-e6f0-4244-80cf-67e85867d21f-kube-api-access-bnggp\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.499017 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-combined-ca-bundle\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.501663 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-dns-svc\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.504737 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-config\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.507363 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.508441 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.510638 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.527474 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79nrw\" (UniqueName: \"kubernetes.io/projected/9299ea16-3ac9-4356-916d-663e04e08206-kube-api-access-79nrw\") pod \"dnsmasq-dns-5cc67f459c-2m4rx\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.533228 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.590919 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.603996 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-config-data\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.604227 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnggp\" (UniqueName: \"kubernetes.io/projected/dc6e227e-e6f0-4244-80cf-67e85867d21f-kube-api-access-bnggp\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.604292 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-combined-ca-bundle\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.604408 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-config-data-custom\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.604431 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc6e227e-e6f0-4244-80cf-67e85867d21f-logs\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.606599 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc6e227e-e6f0-4244-80cf-67e85867d21f-logs\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.610832 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-config-data\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.616214 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-combined-ca-bundle\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.622402 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnggp\" (UniqueName: \"kubernetes.io/projected/dc6e227e-e6f0-4244-80cf-67e85867d21f-kube-api-access-bnggp\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.624488 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-config-data-custom\") pod \"barbican-api-74dfb57686-drx5k\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.860696 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:16:59 crc kubenswrapper[4930]: I1124 12:16:59.874038 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" podUID="6d5e6363-1256-4dc1-b84b-a40298dd9d39" containerName="dnsmasq-dns" containerID="cri-o://aa63a2b748b51592eb2142780d3fb2b06e0da28395faa63243c01c8b137b35e6" gracePeriod=10
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.173166 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-b86dc847c-csn2f"]
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.190778 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86bf5c4cf6-tbptj"]
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.498687 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-2m4rx"]
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.793216 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5nhh8"
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.853477 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74dfb57686-drx5k"]
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.877092 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-fernet-keys\") pod \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") "
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.879041 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-combined-ca-bundle\") pod \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") "
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.879186 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-config-data\") pod \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") "
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.879323 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-scripts\") pod \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") "
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.879567 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-credential-keys\") pod \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") "
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.879682 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qf7x\" (UniqueName: \"kubernetes.io/projected/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-kube-api-access-2qf7x\") pod \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\" (UID: \"8f75b82b-237c-4bcd-9bd4-8e72a43204aa\") "
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.883233 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8f75b82b-237c-4bcd-9bd4-8e72a43204aa" (UID: "8f75b82b-237c-4bcd-9bd4-8e72a43204aa"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.907930 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-scripts" (OuterVolumeSpecName: "scripts") pod "8f75b82b-237c-4bcd-9bd4-8e72a43204aa" (UID: "8f75b82b-237c-4bcd-9bd4-8e72a43204aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.910360 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-kube-api-access-2qf7x" (OuterVolumeSpecName: "kube-api-access-2qf7x") pod "8f75b82b-237c-4bcd-9bd4-8e72a43204aa" (UID: "8f75b82b-237c-4bcd-9bd4-8e72a43204aa"). InnerVolumeSpecName "kube-api-access-2qf7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.969233 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8f75b82b-237c-4bcd-9bd4-8e72a43204aa" (UID: "8f75b82b-237c-4bcd-9bd4-8e72a43204aa"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.985325 4930 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-credential-keys\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.985570 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qf7x\" (UniqueName: \"kubernetes.io/projected/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-kube-api-access-2qf7x\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.985673 4930 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:00 crc kubenswrapper[4930]: I1124 12:17:00.985746 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.042453 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7d65b7d547-xbx74"]
Nov 24 12:17:01 crc kubenswrapper[4930]: E1124 12:17:01.043458 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f75b82b-237c-4bcd-9bd4-8e72a43204aa" containerName="keystone-bootstrap"
Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.043582 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f75b82b-237c-4bcd-9bd4-8e72a43204aa" containerName="keystone-bootstrap"
Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.043888 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f75b82b-237c-4bcd-9bd4-8e72a43204aa" containerName="keystone-bootstrap"
Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.048057 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d65b7d547-xbx74"
Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.054767 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.059669 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5nhh8" event={"ID":"8f75b82b-237c-4bcd-9bd4-8e72a43204aa","Type":"ContainerDied","Data":"54091ffafde0087e39113c398cdaf54f30e80190b153df4d1bacd90576cbcdc4"}
Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.059877 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54091ffafde0087e39113c398cdaf54f30e80190b153df4d1bacd90576cbcdc4"
Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.060015 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5nhh8"
Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.063775 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.065097 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj" event={"ID":"b24c3d9b-ee6d-47ef-9391-91a395edbfbd","Type":"ContainerStarted","Data":"9ed2bafcc9332e58df613f8aba3bcdfdcace106418ec9caa97e8d9089573afa3"}
Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.072293 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d65b7d547-xbx74"]
Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.080033 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx" event={"ID":"9299ea16-3ac9-4356-916d-663e04e08206","Type":"ContainerStarted","Data":"3cb628c68bb5f86b49a65acda8649f9e6a280118822ec456d5e4492b1016f8af"}
Nov 24 12:17:01 crc kubenswrapper[4930]:
I1124 12:17:01.093662 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.094258 4930 generic.go:334] "Generic (PLEG): container finished" podID="6d5e6363-1256-4dc1-b84b-a40298dd9d39" containerID="aa63a2b748b51592eb2142780d3fb2b06e0da28395faa63243c01c8b137b35e6" exitCode=0 Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.095582 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.095709 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" event={"ID":"6d5e6363-1256-4dc1-b84b-a40298dd9d39","Type":"ContainerDied","Data":"aa63a2b748b51592eb2142780d3fb2b06e0da28395faa63243c01c8b137b35e6"} Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.120440 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b86dc847c-csn2f" event={"ID":"4a583517-6311-464a-b855-2a2d1e788461","Type":"ContainerStarted","Data":"940216d0d50f526b3b49084ee5e9df024a60276c2f5e88621f83d600ff9f3c3d"} Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.120658 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.120686 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.149397 4930 generic.go:334] "Generic (PLEG): container finished" podID="44fb1f8c-0796-4310-b053-8222837cfbf2" containerID="4a44ef7b109c6ccd359bbdb8ed3e9bf626ae274ff45ade5597326ee672520e40" exitCode=0 Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.149499 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pzqxp" 
event={"ID":"44fb1f8c-0796-4310-b053-8222837cfbf2","Type":"ContainerDied","Data":"4a44ef7b109c6ccd359bbdb8ed3e9bf626ae274ff45ade5597326ee672520e40"} Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.158372 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74dfb57686-drx5k" event={"ID":"dc6e227e-e6f0-4244-80cf-67e85867d21f","Type":"ContainerStarted","Data":"ae29de56d969ed151bb8105d6a40693f240ad53a2a94ca03f1354480e0b046ab"} Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.159623 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f75b82b-237c-4bcd-9bd4-8e72a43204aa" (UID: "8f75b82b-237c-4bcd-9bd4-8e72a43204aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.190754 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-combined-ca-bundle\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.190828 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-fernet-keys\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.191366 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-internal-tls-certs\") pod 
\"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.191465 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-scripts\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.191523 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdgwk\" (UniqueName: \"kubernetes.io/projected/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-kube-api-access-qdgwk\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.191637 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-public-tls-certs\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.191712 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-credential-keys\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.191826 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-config-data\") pod 
\"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.192031 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.203525 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.224220 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.230759 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-config-data" (OuterVolumeSpecName: "config-data") pod "8f75b82b-237c-4bcd-9bd4-8e72a43204aa" (UID: "8f75b82b-237c-4bcd-9bd4-8e72a43204aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.233878 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.234036 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.309647 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.316285 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-dns-swift-storage-0\") pod \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.316428 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p4f5\" (UniqueName: \"kubernetes.io/projected/6d5e6363-1256-4dc1-b84b-a40298dd9d39-kube-api-access-9p4f5\") pod \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.316461 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-ovsdbserver-nb\") pod \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.316614 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-dns-svc\") pod \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\" (UID: 
\"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.316646 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-ovsdbserver-sb\") pod \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.316681 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-config\") pod \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\" (UID: \"6d5e6363-1256-4dc1-b84b-a40298dd9d39\") " Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.317020 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-combined-ca-bundle\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.317047 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-fernet-keys\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.317091 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-internal-tls-certs\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.317126 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-scripts\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.317155 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdgwk\" (UniqueName: \"kubernetes.io/projected/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-kube-api-access-qdgwk\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.317208 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-public-tls-certs\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.317236 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-credential-keys\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.317276 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-config-data\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.317359 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8f75b82b-237c-4bcd-9bd4-8e72a43204aa-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.338460 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-combined-ca-bundle\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.342499 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-internal-tls-certs\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.353090 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-fernet-keys\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.367231 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-scripts\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.367910 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-config-data\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: 
I1124 12:17:01.370089 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-public-tls-certs\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.407681 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-credential-keys\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.431353 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdgwk\" (UniqueName: \"kubernetes.io/projected/cddd20a0-4ab1-4747-86ec-3dbd6ae06f74-kube-api-access-qdgwk\") pod \"keystone-7d65b7d547-xbx74\" (UID: \"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74\") " pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.473817 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5e6363-1256-4dc1-b84b-a40298dd9d39-kube-api-access-9p4f5" (OuterVolumeSpecName: "kube-api-access-9p4f5") pod "6d5e6363-1256-4dc1-b84b-a40298dd9d39" (UID: "6d5e6363-1256-4dc1-b84b-a40298dd9d39"). InnerVolumeSpecName "kube-api-access-9p4f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.525120 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p4f5\" (UniqueName: \"kubernetes.io/projected/6d5e6363-1256-4dc1-b84b-a40298dd9d39-kube-api-access-9p4f5\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.689528 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.746315 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d5e6363-1256-4dc1-b84b-a40298dd9d39" (UID: "6d5e6363-1256-4dc1-b84b-a40298dd9d39"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.770168 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6d5e6363-1256-4dc1-b84b-a40298dd9d39" (UID: "6d5e6363-1256-4dc1-b84b-a40298dd9d39"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.771081 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-config" (OuterVolumeSpecName: "config") pod "6d5e6363-1256-4dc1-b84b-a40298dd9d39" (UID: "6d5e6363-1256-4dc1-b84b-a40298dd9d39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.798348 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6d5e6363-1256-4dc1-b84b-a40298dd9d39" (UID: "6d5e6363-1256-4dc1-b84b-a40298dd9d39"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.803676 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.804680 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.810114 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.810159 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.810194 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.811460 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df660b89ae8561454b3d98787dfb50644dbca73ff06ad5c87819e47a0f113710"} pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.811526 4930 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://df660b89ae8561454b3d98787dfb50644dbca73ff06ad5c87819e47a0f113710" gracePeriod=600 Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.819523 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6d5e6363-1256-4dc1-b84b-a40298dd9d39" (UID: "6d5e6363-1256-4dc1-b84b-a40298dd9d39"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.833047 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.833097 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.833109 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.833120 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.833133 4930 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d5e6363-1256-4dc1-b84b-a40298dd9d39-dns-swift-storage-0\") 
on node \"crc\" DevicePath \"\"" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.918522 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:17:01 crc kubenswrapper[4930]: I1124 12:17:01.919733 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.178982 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" event={"ID":"6d5e6363-1256-4dc1-b84b-a40298dd9d39","Type":"ContainerDied","Data":"80a5cde113a2d55b63db5996ab48a27272f4105ecde030b7cea331a481157f5c"} Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.179345 4930 scope.go:117] "RemoveContainer" containerID="aa63a2b748b51592eb2142780d3fb2b06e0da28395faa63243c01c8b137b35e6" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.179584 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c654c9745-mr8zv" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.196156 4930 generic.go:334] "Generic (PLEG): container finished" podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="df660b89ae8561454b3d98787dfb50644dbca73ff06ad5c87819e47a0f113710" exitCode=0 Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.196284 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"df660b89ae8561454b3d98787dfb50644dbca73ff06ad5c87819e47a0f113710"} Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.205162 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74dfb57686-drx5k" event={"ID":"dc6e227e-e6f0-4244-80cf-67e85867d21f","Type":"ContainerStarted","Data":"c8307f76e6b5ff9c0312c956e3bf480236593b6f373421abb7ecb13e9864d901"} Nov 24 12:17:02 crc 
kubenswrapper[4930]: I1124 12:17:02.207355 4930 generic.go:334] "Generic (PLEG): container finished" podID="9299ea16-3ac9-4356-916d-663e04e08206" containerID="ca156202cb7c8fbbcad73bdc708fb44228e24ed48b067d8dea16447b768f9512" exitCode=0 Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.209509 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx" event={"ID":"9299ea16-3ac9-4356-916d-663e04e08206","Type":"ContainerDied","Data":"ca156202cb7c8fbbcad73bdc708fb44228e24ed48b067d8dea16447b768f9512"} Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.212137 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.212164 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.212175 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c654c9745-mr8zv"] Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.212203 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.214778 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.222295 4930 scope.go:117] "RemoveContainer" containerID="2e8d46714b60a6f25a6b7dd5f076fa9710476b72a47b6f357fefbcdbb841c623" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.223166 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c654c9745-mr8zv"] Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.274166 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d65b7d547-xbx74"] Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.297784 4930 
scope.go:117] "RemoveContainer" containerID="c44ba46dc50db3a20b23969f9cbea1fb9792d70b783114e4cab0eaa15b434f1d" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.755447 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-pzqxp" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.784805 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.866532 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-config-data\") pod \"44fb1f8c-0796-4310-b053-8222837cfbf2\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.866672 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdfx9\" (UniqueName: \"kubernetes.io/projected/44fb1f8c-0796-4310-b053-8222837cfbf2-kube-api-access-fdfx9\") pod \"44fb1f8c-0796-4310-b053-8222837cfbf2\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.866909 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-scripts\") pod \"44fb1f8c-0796-4310-b053-8222837cfbf2\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.866950 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44fb1f8c-0796-4310-b053-8222837cfbf2-logs\") pod \"44fb1f8c-0796-4310-b053-8222837cfbf2\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.867006 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-combined-ca-bundle\") pod \"44fb1f8c-0796-4310-b053-8222837cfbf2\" (UID: \"44fb1f8c-0796-4310-b053-8222837cfbf2\") " Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.868149 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44fb1f8c-0796-4310-b053-8222837cfbf2-logs" (OuterVolumeSpecName: "logs") pod "44fb1f8c-0796-4310-b053-8222837cfbf2" (UID: "44fb1f8c-0796-4310-b053-8222837cfbf2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.917450 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-scripts" (OuterVolumeSpecName: "scripts") pod "44fb1f8c-0796-4310-b053-8222837cfbf2" (UID: "44fb1f8c-0796-4310-b053-8222837cfbf2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.920848 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44fb1f8c-0796-4310-b053-8222837cfbf2-kube-api-access-fdfx9" (OuterVolumeSpecName: "kube-api-access-fdfx9") pod "44fb1f8c-0796-4310-b053-8222837cfbf2" (UID: "44fb1f8c-0796-4310-b053-8222837cfbf2"). InnerVolumeSpecName "kube-api-access-fdfx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.969413 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.969448 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44fb1f8c-0796-4310-b053-8222837cfbf2-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.969478 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdfx9\" (UniqueName: \"kubernetes.io/projected/44fb1f8c-0796-4310-b053-8222837cfbf2-kube-api-access-fdfx9\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.988639 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-84d7bcd766-9sdc2"] Nov 24 12:17:02 crc kubenswrapper[4930]: E1124 12:17:02.989126 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44fb1f8c-0796-4310-b053-8222837cfbf2" containerName="placement-db-sync" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.989143 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="44fb1f8c-0796-4310-b053-8222837cfbf2" containerName="placement-db-sync" Nov 24 12:17:02 crc kubenswrapper[4930]: E1124 12:17:02.989168 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5e6363-1256-4dc1-b84b-a40298dd9d39" containerName="dnsmasq-dns" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.989177 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5e6363-1256-4dc1-b84b-a40298dd9d39" containerName="dnsmasq-dns" Nov 24 12:17:02 crc kubenswrapper[4930]: E1124 12:17:02.989194 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5e6363-1256-4dc1-b84b-a40298dd9d39" containerName="init" Nov 24 12:17:02 
crc kubenswrapper[4930]: I1124 12:17:02.989202 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5e6363-1256-4dc1-b84b-a40298dd9d39" containerName="init" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.989417 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d5e6363-1256-4dc1-b84b-a40298dd9d39" containerName="dnsmasq-dns" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.989444 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="44fb1f8c-0796-4310-b053-8222837cfbf2" containerName="placement-db-sync" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.990691 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.994726 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 12:17:02 crc kubenswrapper[4930]: I1124 12:17:02.994898 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.024306 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84d7bcd766-9sdc2"] Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.070747 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44fb1f8c-0796-4310-b053-8222837cfbf2" (UID: "44fb1f8c-0796-4310-b053-8222837cfbf2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.072233 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-internal-tls-certs\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.072373 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-public-tls-certs\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.072522 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-config-data\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.072699 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-combined-ca-bundle\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.072842 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6894j\" (UniqueName: \"kubernetes.io/projected/513243cf-0c25-46b1-a535-906324dca4bb-kube-api-access-6894j\") pod 
\"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.073132 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/513243cf-0c25-46b1-a535-906324dca4bb-logs\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.073275 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-config-data-custom\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.073423 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.074701 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-config-data" (OuterVolumeSpecName: "config-data") pod "44fb1f8c-0796-4310-b053-8222837cfbf2" (UID: "44fb1f8c-0796-4310-b053-8222837cfbf2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.174843 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-internal-tls-certs\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.175189 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-public-tls-certs\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.175228 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-config-data\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.175247 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-combined-ca-bundle\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.175275 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6894j\" (UniqueName: \"kubernetes.io/projected/513243cf-0c25-46b1-a535-906324dca4bb-kube-api-access-6894j\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " 
pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.175357 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/513243cf-0c25-46b1-a535-906324dca4bb-logs\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.175395 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-config-data-custom\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.175451 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44fb1f8c-0796-4310-b053-8222837cfbf2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.177735 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/513243cf-0c25-46b1-a535-906324dca4bb-logs\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.182733 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-config-data-custom\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.183311 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-internal-tls-certs\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.193215 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-combined-ca-bundle\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.200845 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-config-data\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.207302 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6894j\" (UniqueName: \"kubernetes.io/projected/513243cf-0c25-46b1-a535-906324dca4bb-kube-api-access-6894j\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.209695 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/513243cf-0c25-46b1-a535-906324dca4bb-public-tls-certs\") pod \"barbican-api-84d7bcd766-9sdc2\" (UID: \"513243cf-0c25-46b1-a535-906324dca4bb\") " pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.320667 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-784c754f4d-ttmj6"] Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.343351 4930 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-784c754f4d-ttmj6"] Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.343732 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.346823 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"9b35d5fa3eb364268da5b5e0253eae62e65a2c6dfd8d0e613fb3c92e7e1d100d"} Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.357098 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84d7bcd766-9sdc2" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.357152 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.357369 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.380215 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d65b7d547-xbx74" event={"ID":"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74","Type":"ContainerStarted","Data":"d14648629882da70aaf545c4754c2b7ec86964b918b946062c924f6aecb10241"} Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.380275 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d65b7d547-xbx74" event={"ID":"cddd20a0-4ab1-4747-86ec-3dbd6ae06f74","Type":"ContainerStarted","Data":"076d80803a8a30b9d4248c9a4611b34e582d09ca9f145ef7063a063259a08eb4"} Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.380810 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 
12:17:03.385167 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pzqxp" event={"ID":"44fb1f8c-0796-4310-b053-8222837cfbf2","Type":"ContainerDied","Data":"a96e3bcfc422663b448d327841ceb08d0138d74afbe86d8763d5b4af252fc39e"} Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.385203 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a96e3bcfc422663b448d327841ceb08d0138d74afbe86d8763d5b4af252fc39e" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.385261 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-pzqxp" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.431932 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74dfb57686-drx5k" event={"ID":"dc6e227e-e6f0-4244-80cf-67e85867d21f","Type":"ContainerStarted","Data":"c5f426d1e1451e174d5b183a5d679db84119fe06fedbfffd340bb4a6006719fa"} Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.433058 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74dfb57686-drx5k" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.433113 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74dfb57686-drx5k" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.457024 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx" event={"ID":"9299ea16-3ac9-4356-916d-663e04e08206","Type":"ContainerStarted","Data":"72e00ea989d6521591f90edae839633b114a25077ccf4434b538ebec9202e01c"} Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.458317 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.458400 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-7d65b7d547-xbx74" podStartSLOduration=3.458385862 podStartE2EDuration="3.458385862s" podCreationTimestamp="2025-11-24 12:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:03.433573809 +0000 UTC m=+1070.047901759" watchObservedRunningTime="2025-11-24 12:17:03.458385862 +0000 UTC m=+1070.072713812" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.484806 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-config-data\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.488179 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-scripts\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.488377 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-combined-ca-bundle\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.488510 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-public-tls-certs\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " 
pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.488677 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb758c76-2ee4-4bac-8a07-d44205706854-logs\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.490877 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swg28\" (UniqueName: \"kubernetes.io/projected/bb758c76-2ee4-4bac-8a07-d44205706854-kube-api-access-swg28\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.496819 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-internal-tls-certs\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.499686 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-74dfb57686-drx5k" podStartSLOduration=4.499662069 podStartE2EDuration="4.499662069s" podCreationTimestamp="2025-11-24 12:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:03.459831714 +0000 UTC m=+1070.074159664" watchObservedRunningTime="2025-11-24 12:17:03.499662069 +0000 UTC m=+1070.113990009" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.527948 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx" podStartSLOduration=4.527925642 podStartE2EDuration="4.527925642s" podCreationTimestamp="2025-11-24 12:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:03.496638352 +0000 UTC m=+1070.110966552" watchObservedRunningTime="2025-11-24 12:17:03.527925642 +0000 UTC m=+1070.142253592" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.561925 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.599501 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swg28\" (UniqueName: \"kubernetes.io/projected/bb758c76-2ee4-4bac-8a07-d44205706854-kube-api-access-swg28\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.599585 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-internal-tls-certs\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.599859 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-config-data\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.599883 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-scripts\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.599966 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-combined-ca-bundle\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.600077 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-public-tls-certs\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.600555 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb758c76-2ee4-4bac-8a07-d44205706854-logs\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.600971 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb758c76-2ee4-4bac-8a07-d44205706854-logs\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.628946 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-scripts\") pod \"placement-784c754f4d-ttmj6\" (UID: 
\"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.638570 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-combined-ca-bundle\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.641410 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swg28\" (UniqueName: \"kubernetes.io/projected/bb758c76-2ee4-4bac-8a07-d44205706854-kube-api-access-swg28\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.659426 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-public-tls-certs\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.660258 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-internal-tls-certs\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.666447 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb758c76-2ee4-4bac-8a07-d44205706854-config-data\") pod \"placement-784c754f4d-ttmj6\" (UID: \"bb758c76-2ee4-4bac-8a07-d44205706854\") " pod="openstack/placement-784c754f4d-ttmj6" Nov 24 
12:17:03 crc kubenswrapper[4930]: I1124 12:17:03.820670 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-784c754f4d-ttmj6"
Nov 24 12:17:04 crc kubenswrapper[4930]: I1124 12:17:04.121583 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5e6363-1256-4dc1-b84b-a40298dd9d39" path="/var/lib/kubelet/pods/6d5e6363-1256-4dc1-b84b-a40298dd9d39/volumes"
Nov 24 12:17:04 crc kubenswrapper[4930]: I1124 12:17:04.260700 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84d7bcd766-9sdc2"]
Nov 24 12:17:04 crc kubenswrapper[4930]: I1124 12:17:04.470429 4930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 12:17:04 crc kubenswrapper[4930]: I1124 12:17:04.470453 4930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 12:17:04 crc kubenswrapper[4930]: I1124 12:17:04.470452 4930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 12:17:04 crc kubenswrapper[4930]: I1124 12:17:04.470482 4930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 12:17:06 crc kubenswrapper[4930]: I1124 12:17:06.244450 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 12:17:06 crc kubenswrapper[4930]: I1124 12:17:06.244920 4930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 12:17:07 crc kubenswrapper[4930]: I1124 12:17:07.483918 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 12:17:07 crc kubenswrapper[4930]: I1124 12:17:07.484493 4930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 12:17:07 crc kubenswrapper[4930]: I1124 12:17:07.495117 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 12:17:07 crc kubenswrapper[4930]: I1124 12:17:07.501293 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84d7bcd766-9sdc2" event={"ID":"513243cf-0c25-46b1-a535-906324dca4bb","Type":"ContainerStarted","Data":"3c7f6a98963e4c09a044f366896f5f5065e84b5ab0f6f63bfedf5ed18b9361fe"}
Nov 24 12:17:07 crc kubenswrapper[4930]: I1124 12:17:07.516369 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 12:17:09 crc kubenswrapper[4930]: I1124 12:17:09.592420 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx"
Nov 24 12:17:09 crc kubenswrapper[4930]: I1124 12:17:09.666264 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-vcnwb"]
Nov 24 12:17:09 crc kubenswrapper[4930]: I1124 12:17:09.666612 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" podUID="6bcc447f-d403-4536-8f54-f728fa999a19" containerName="dnsmasq-dns" containerID="cri-o://935b0f5442c3bfffb88a6a25069913cd16ac8f8b9b0c2938e5f0e6d3ef9a5574" gracePeriod=10
Nov 24 12:17:09 crc kubenswrapper[4930]: E1124 12:17:09.865186 4930 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bcc447f_d403_4536_8f54_f728fa999a19.slice/crio-935b0f5442c3bfffb88a6a25069913cd16ac8f8b9b0c2938e5f0e6d3ef9a5574.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bcc447f_d403_4536_8f54_f728fa999a19.slice/crio-conmon-935b0f5442c3bfffb88a6a25069913cd16ac8f8b9b0c2938e5f0e6d3ef9a5574.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 12:17:10 crc kubenswrapper[4930]: I1124 12:17:10.539326 4930 generic.go:334] "Generic (PLEG): container finished" podID="6bcc447f-d403-4536-8f54-f728fa999a19" containerID="935b0f5442c3bfffb88a6a25069913cd16ac8f8b9b0c2938e5f0e6d3ef9a5574" exitCode=0
Nov 24 12:17:10 crc kubenswrapper[4930]: I1124 12:17:10.539402 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" event={"ID":"6bcc447f-d403-4536-8f54-f728fa999a19","Type":"ContainerDied","Data":"935b0f5442c3bfffb88a6a25069913cd16ac8f8b9b0c2938e5f0e6d3ef9a5574"}
Nov 24 12:17:11 crc kubenswrapper[4930]: I1124 12:17:11.698285 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:17:11 crc kubenswrapper[4930]: I1124 12:17:11.805061 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69b96dd4dd-2xcvn" podUID="dc1269fb-938b-4634-a683-9b0375e01915" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Nov 24 12:17:11 crc kubenswrapper[4930]: I1124 12:17:11.922113 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b7594b454-4gfnw" podUID="8851e459-770d-4a08-8b35-41e3e060608b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Nov 24 12:17:12 crc kubenswrapper[4930]: I1124 12:17:12.393070 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74dfb57686-drx5k"
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.013635 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb"
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.110013 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zffq8\" (UniqueName: \"kubernetes.io/projected/6bcc447f-d403-4536-8f54-f728fa999a19-kube-api-access-zffq8\") pod \"6bcc447f-d403-4536-8f54-f728fa999a19\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") "
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.110060 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-dns-svc\") pod \"6bcc447f-d403-4536-8f54-f728fa999a19\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") "
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.110092 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-ovsdbserver-nb\") pod \"6bcc447f-d403-4536-8f54-f728fa999a19\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") "
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.110121 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-config\") pod \"6bcc447f-d403-4536-8f54-f728fa999a19\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") "
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.110191 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-dns-swift-storage-0\") pod \"6bcc447f-d403-4536-8f54-f728fa999a19\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") "
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.110220 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-ovsdbserver-sb\") pod \"6bcc447f-d403-4536-8f54-f728fa999a19\" (UID: \"6bcc447f-d403-4536-8f54-f728fa999a19\") "
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.123228 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-784c754f4d-ttmj6"]
Nov 24 12:17:13 crc kubenswrapper[4930]: W1124 12:17:13.125636 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb758c76_2ee4_4bac_8a07_d44205706854.slice/crio-cad5cdb9d19ac96e6c8a632eb81dc7a07ca8582e010cb35d65ff76ca4a8c3963 WatchSource:0}: Error finding container cad5cdb9d19ac96e6c8a632eb81dc7a07ca8582e010cb35d65ff76ca4a8c3963: Status 404 returned error can't find the container with id cad5cdb9d19ac96e6c8a632eb81dc7a07ca8582e010cb35d65ff76ca4a8c3963
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.142301 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bcc447f-d403-4536-8f54-f728fa999a19-kube-api-access-zffq8" (OuterVolumeSpecName: "kube-api-access-zffq8") pod "6bcc447f-d403-4536-8f54-f728fa999a19" (UID: "6bcc447f-d403-4536-8f54-f728fa999a19"). InnerVolumeSpecName "kube-api-access-zffq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.212070 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zffq8\" (UniqueName: \"kubernetes.io/projected/6bcc447f-d403-4536-8f54-f728fa999a19-kube-api-access-zffq8\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.260180 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6bcc447f-d403-4536-8f54-f728fa999a19" (UID: "6bcc447f-d403-4536-8f54-f728fa999a19"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.303289 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-config" (OuterVolumeSpecName: "config") pod "6bcc447f-d403-4536-8f54-f728fa999a19" (UID: "6bcc447f-d403-4536-8f54-f728fa999a19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.308101 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6bcc447f-d403-4536-8f54-f728fa999a19" (UID: "6bcc447f-d403-4536-8f54-f728fa999a19"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.311285 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6bcc447f-d403-4536-8f54-f728fa999a19" (UID: "6bcc447f-d403-4536-8f54-f728fa999a19"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.313656 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.313698 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.313712 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-config\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.313724 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.314461 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6bcc447f-d403-4536-8f54-f728fa999a19" (UID: "6bcc447f-d403-4536-8f54-f728fa999a19"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.414815 4930 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bcc447f-d403-4536-8f54-f728fa999a19-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.602707 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj" event={"ID":"b24c3d9b-ee6d-47ef-9391-91a395edbfbd","Type":"ContainerStarted","Data":"56dd45254571cc3f7ce499267e52eb277392da98578a45cf9571a9782b92808e"}
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.684042 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb" event={"ID":"6bcc447f-d403-4536-8f54-f728fa999a19","Type":"ContainerDied","Data":"73ccdc8bd1365c0f92131701df327d81693b62df7b819b5e4660b7938ddb4b9c"}
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.684101 4930 scope.go:117] "RemoveContainer" containerID="935b0f5442c3bfffb88a6a25069913cd16ac8f8b9b0c2938e5f0e6d3ef9a5574"
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.684186 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c58b6d97-vcnwb"
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.701416 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84d7bcd766-9sdc2" event={"ID":"513243cf-0c25-46b1-a535-906324dca4bb","Type":"ContainerStarted","Data":"7d3bd365aa150ef6eabc4ac6b53a9651f207a0143b8d3da5fd042ba7ca0bbe9e"}
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.704399 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-784c754f4d-ttmj6" event={"ID":"bb758c76-2ee4-4bac-8a07-d44205706854","Type":"ContainerStarted","Data":"cad5cdb9d19ac96e6c8a632eb81dc7a07ca8582e010cb35d65ff76ca4a8c3963"}
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.735239 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b86dc847c-csn2f" event={"ID":"4a583517-6311-464a-b855-2a2d1e788461","Type":"ContainerStarted","Data":"3de00a34416778241d38abbdf3e82cf156cc60e4c18dc9eb7d1b1f4b47c072e0"}
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.746096 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9","Type":"ContainerStarted","Data":"b7e80d81acb3786253db05cc57e95eb06d59a40fc51ae12a1de6cd67f028f3e3"}
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.793888 4930 scope.go:117] "RemoveContainer" containerID="b8836368a310634418cbe3d6d709a21b5f8c7b65f9e1b592dedf4c1375d4e848"
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.828496 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-vcnwb"]
Nov 24 12:17:13 crc kubenswrapper[4930]: I1124 12:17:13.847839 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-vcnwb"]
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.112643 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bcc447f-d403-4536-8f54-f728fa999a19" path="/var/lib/kubelet/pods/6bcc447f-d403-4536-8f54-f728fa999a19/volumes"
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.789667 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rcfd6" event={"ID":"9169db1f-c94f-45a3-bc97-6ad40d17b7d1","Type":"ContainerStarted","Data":"f840e1d54a7caffd606ed22fbeef274096c181663d935deb0122f4a5fee46fda"}
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.803262 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj" event={"ID":"b24c3d9b-ee6d-47ef-9391-91a395edbfbd","Type":"ContainerStarted","Data":"bba0f27e2e65a414315f469839972b0a9981fce6969f25b3b6cf233c52834438"}
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.810479 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84d7bcd766-9sdc2" event={"ID":"513243cf-0c25-46b1-a535-906324dca4bb","Type":"ContainerStarted","Data":"d9b03c3329c6c57733e4cd4c8d1c87dc3c911c51a3b608e0d1674c0e60ae08ff"}
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.811528 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84d7bcd766-9sdc2"
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.811589 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84d7bcd766-9sdc2"
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.826922 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-rcfd6" podStartSLOduration=5.65758502 podStartE2EDuration="52.826903155s" podCreationTimestamp="2025-11-24 12:16:22 +0000 UTC" firstStartedPulling="2025-11-24 12:16:25.461091868 +0000 UTC m=+1032.075419818" lastFinishedPulling="2025-11-24 12:17:12.630409993 +0000 UTC m=+1079.244737953" observedRunningTime="2025-11-24 12:17:14.814757376 +0000 UTC m=+1081.429085326" watchObservedRunningTime="2025-11-24 12:17:14.826903155 +0000 UTC m=+1081.441231105"
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.834910 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-784c754f4d-ttmj6" event={"ID":"bb758c76-2ee4-4bac-8a07-d44205706854","Type":"ContainerStarted","Data":"28aba56b4da050d44bed3febfb0eaa8e53a5d3b0fe86478e04cfefc75551bbeb"}
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.835225 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-784c754f4d-ttmj6" event={"ID":"bb758c76-2ee4-4bac-8a07-d44205706854","Type":"ContainerStarted","Data":"329168ce6a5ac4106a35879aee56778c93b263f9e759fb86efd2810dfa72cd8d"}
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.835252 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-784c754f4d-ttmj6"
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.835626 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-784c754f4d-ttmj6"
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.839111 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-84d7bcd766-9sdc2" podStartSLOduration=12.839074265 podStartE2EDuration="12.839074265s" podCreationTimestamp="2025-11-24 12:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:14.834813063 +0000 UTC m=+1081.449141013" watchObservedRunningTime="2025-11-24 12:17:14.839074265 +0000 UTC m=+1081.453402215"
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.853207 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b86dc847c-csn2f" event={"ID":"4a583517-6311-464a-b855-2a2d1e788461","Type":"ContainerStarted","Data":"d7a006d2e0fa5811228d49c897b21d549061f221ac484a90ff3b58dc2d678c49"}
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.884079 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-86bf5c4cf6-tbptj" podStartSLOduration=3.679008917 podStartE2EDuration="15.884050209s" podCreationTimestamp="2025-11-24 12:16:59 +0000 UTC" firstStartedPulling="2025-11-24 12:17:00.201871543 +0000 UTC m=+1066.816199493" lastFinishedPulling="2025-11-24 12:17:12.406912835 +0000 UTC m=+1079.021240785" observedRunningTime="2025-11-24 12:17:14.875341579 +0000 UTC m=+1081.489669539" watchObservedRunningTime="2025-11-24 12:17:14.884050209 +0000 UTC m=+1081.498378159"
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.908218 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-784c754f4d-ttmj6" podStartSLOduration=11.908199794 podStartE2EDuration="11.908199794s" podCreationTimestamp="2025-11-24 12:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:14.906232427 +0000 UTC m=+1081.520560377" watchObservedRunningTime="2025-11-24 12:17:14.908199794 +0000 UTC m=+1081.522527744"
Nov 24 12:17:14 crc kubenswrapper[4930]: I1124 12:17:14.934825 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-b86dc847c-csn2f" podStartSLOduration=3.666434906 podStartE2EDuration="15.934807659s" podCreationTimestamp="2025-11-24 12:16:59 +0000 UTC" firstStartedPulling="2025-11-24 12:17:00.146304085 +0000 UTC m=+1066.760632035" lastFinishedPulling="2025-11-24 12:17:12.414676838 +0000 UTC m=+1079.029004788" observedRunningTime="2025-11-24 12:17:14.928306972 +0000 UTC m=+1081.542634922" watchObservedRunningTime="2025-11-24 12:17:14.934807659 +0000 UTC m=+1081.549135609"
Nov 24 12:17:18 crc kubenswrapper[4930]: I1124 12:17:18.895906 4930 generic.go:334] "Generic (PLEG): container finished" podID="9169db1f-c94f-45a3-bc97-6ad40d17b7d1" containerID="f840e1d54a7caffd606ed22fbeef274096c181663d935deb0122f4a5fee46fda" exitCode=0
Nov 24 12:17:18 crc kubenswrapper[4930]: I1124 12:17:18.896397 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rcfd6" event={"ID":"9169db1f-c94f-45a3-bc97-6ad40d17b7d1","Type":"ContainerDied","Data":"f840e1d54a7caffd606ed22fbeef274096c181663d935deb0122f4a5fee46fda"}
Nov 24 12:17:20 crc kubenswrapper[4930]: I1124 12:17:20.054745 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84d7bcd766-9sdc2"
Nov 24 12:17:20 crc kubenswrapper[4930]: I1124 12:17:20.634323 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84d7bcd766-9sdc2"
Nov 24 12:17:20 crc kubenswrapper[4930]: I1124 12:17:20.721857 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74dfb57686-drx5k"]
Nov 24 12:17:20 crc kubenswrapper[4930]: I1124 12:17:20.722071 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74dfb57686-drx5k" podUID="dc6e227e-e6f0-4244-80cf-67e85867d21f" containerName="barbican-api-log" containerID="cri-o://c8307f76e6b5ff9c0312c956e3bf480236593b6f373421abb7ecb13e9864d901" gracePeriod=30
Nov 24 12:17:20 crc kubenswrapper[4930]: I1124 12:17:20.722366 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74dfb57686-drx5k" podUID="dc6e227e-e6f0-4244-80cf-67e85867d21f" containerName="barbican-api" containerID="cri-o://c5f426d1e1451e174d5b183a5d679db84119fe06fedbfffd340bb4a6006719fa" gracePeriod=30
Nov 24 12:17:20 crc kubenswrapper[4930]: I1124 12:17:20.930273 4930 generic.go:334] "Generic (PLEG): container finished" podID="dc6e227e-e6f0-4244-80cf-67e85867d21f" containerID="c8307f76e6b5ff9c0312c956e3bf480236593b6f373421abb7ecb13e9864d901" exitCode=143
Nov 24 12:17:20 crc kubenswrapper[4930]: I1124 12:17:20.931158 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74dfb57686-drx5k" event={"ID":"dc6e227e-e6f0-4244-80cf-67e85867d21f","Type":"ContainerDied","Data":"c8307f76e6b5ff9c0312c956e3bf480236593b6f373421abb7ecb13e9864d901"}
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.206266 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rcfd6"
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.314024 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-combined-ca-bundle\") pod \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") "
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.314117 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-config-data\") pod \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") "
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.314291 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g62nv\" (UniqueName: \"kubernetes.io/projected/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-kube-api-access-g62nv\") pod \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") "
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.314335 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-db-sync-config-data\") pod \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") "
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.314369 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-etc-machine-id\") pod \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") "
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.314413 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-scripts\") pod \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\" (UID: \"9169db1f-c94f-45a3-bc97-6ad40d17b7d1\") "
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.321848 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9169db1f-c94f-45a3-bc97-6ad40d17b7d1" (UID: "9169db1f-c94f-45a3-bc97-6ad40d17b7d1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.325312 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-scripts" (OuterVolumeSpecName: "scripts") pod "9169db1f-c94f-45a3-bc97-6ad40d17b7d1" (UID: "9169db1f-c94f-45a3-bc97-6ad40d17b7d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.327806 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9169db1f-c94f-45a3-bc97-6ad40d17b7d1" (UID: "9169db1f-c94f-45a3-bc97-6ad40d17b7d1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.331862 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-kube-api-access-g62nv" (OuterVolumeSpecName: "kube-api-access-g62nv") pod "9169db1f-c94f-45a3-bc97-6ad40d17b7d1" (UID: "9169db1f-c94f-45a3-bc97-6ad40d17b7d1"). InnerVolumeSpecName "kube-api-access-g62nv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.390920 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9169db1f-c94f-45a3-bc97-6ad40d17b7d1" (UID: "9169db1f-c94f-45a3-bc97-6ad40d17b7d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.398614 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-config-data" (OuterVolumeSpecName: "config-data") pod "9169db1f-c94f-45a3-bc97-6ad40d17b7d1" (UID: "9169db1f-c94f-45a3-bc97-6ad40d17b7d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.416581 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.416611 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.416621 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g62nv\" (UniqueName: \"kubernetes.io/projected/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-kube-api-access-g62nv\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.416635 4930 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.416643 4930 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.416651 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9169db1f-c94f-45a3-bc97-6ad40d17b7d1-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.804191 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69b96dd4dd-2xcvn" podUID="dc1269fb-938b-4634-a683-9b0375e01915" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.919699 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b7594b454-4gfnw" podUID="8851e459-770d-4a08-8b35-41e3e060608b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.940842 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rcfd6" event={"ID":"9169db1f-c94f-45a3-bc97-6ad40d17b7d1","Type":"ContainerDied","Data":"6e051fbb02b7de4fe8f850bb89144fa0dd346476448a91def85fecbd8bee41d0"}
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.940880 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e051fbb02b7de4fe8f850bb89144fa0dd346476448a91def85fecbd8bee41d0"
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.940894 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rcfd6"
Nov 24 12:17:21 crc kubenswrapper[4930]: I1124 12:17:21.950483 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-575d598bfb-msnzv"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.526276 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-797bbc649-cjwcc"]
Nov 24 12:17:22 crc kubenswrapper[4930]: E1124 12:17:22.543760 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9169db1f-c94f-45a3-bc97-6ad40d17b7d1" containerName="cinder-db-sync"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.543802 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="9169db1f-c94f-45a3-bc97-6ad40d17b7d1" containerName="cinder-db-sync"
Nov 24 12:17:22 crc kubenswrapper[4930]: E1124 12:17:22.543838 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bcc447f-d403-4536-8f54-f728fa999a19" containerName="init"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.543845 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bcc447f-d403-4536-8f54-f728fa999a19" containerName="init"
Nov 24 12:17:22 crc kubenswrapper[4930]: E1124 12:17:22.543879 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bcc447f-d403-4536-8f54-f728fa999a19" containerName="dnsmasq-dns"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.543886 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bcc447f-d403-4536-8f54-f728fa999a19" containerName="dnsmasq-dns"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.544219 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bcc447f-d403-4536-8f54-f728fa999a19" containerName="dnsmasq-dns"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.544240 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="9169db1f-c94f-45a3-bc97-6ad40d17b7d1" containerName="cinder-db-sync"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.545444 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-cjwcc"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.552119 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-cjwcc"]
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.567799 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.569194 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.572681 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.572999 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.573137 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gd7zr"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.573264 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.585191 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.638819 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-dns-svc\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.638902 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-ovsdbserver-nb\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.638996 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-config\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.639025 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-ovsdbserver-sb\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.639084 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzjbv\" (UniqueName: \"kubernetes.io/projected/77b75b6e-7ded-4307-8e62-b15ff18acffe-kube-api-access-qzjbv\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.639146 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-dns-swift-storage-0\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.711794 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.717612 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.722130 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.732328 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.747167 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-dns-svc\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.747227 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.747268 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.747297 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-ovsdbserver-nb\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.747321 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-config-data\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.747376 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66691740-aef1-4155-b12a-a7ce7f9c5f93-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.747413 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2s92\" (UniqueName: \"kubernetes.io/projected/66691740-aef1-4155-b12a-a7ce7f9c5f93-kube-api-access-h2s92\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.747448 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-config\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc"
Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.747468 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-ovsdbserver-sb\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID:
\"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.747522 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzjbv\" (UniqueName: \"kubernetes.io/projected/77b75b6e-7ded-4307-8e62-b15ff18acffe-kube-api-access-qzjbv\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.747579 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-scripts\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.747634 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-dns-swift-storage-0\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.748652 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-ovsdbserver-nb\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.749323 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-dns-svc\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " 
pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.749972 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-config\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.750993 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-ovsdbserver-sb\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.751101 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-dns-swift-storage-0\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.808723 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzjbv\" (UniqueName: \"kubernetes.io/projected/77b75b6e-7ded-4307-8e62-b15ff18acffe-kube-api-access-qzjbv\") pod \"dnsmasq-dns-797bbc649-cjwcc\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.848855 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-scripts\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.849946 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.849969 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.850760 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-config-data-custom\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.850806 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.850832 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-config-data\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.850861 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-config-data\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.850907 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21eda578-0e80-48af-a2ce-5eb783748a04-etc-machine-id\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.850930 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66691740-aef1-4155-b12a-a7ce7f9c5f93-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.850949 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2s92\" (UniqueName: \"kubernetes.io/projected/66691740-aef1-4155-b12a-a7ce7f9c5f93-kube-api-access-h2s92\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.850979 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbn9x\" (UniqueName: \"kubernetes.io/projected/21eda578-0e80-48af-a2ce-5eb783748a04-kube-api-access-hbn9x\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.851040 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21eda578-0e80-48af-a2ce-5eb783748a04-logs\") pod \"cinder-api-0\" (UID: 
\"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.851059 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-scripts\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.853226 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66691740-aef1-4155-b12a-a7ce7f9c5f93-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.862521 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.862942 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-scripts\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.863651 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.865903 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-config-data\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.886023 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.901259 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2s92\" (UniqueName: \"kubernetes.io/projected/66691740-aef1-4155-b12a-a7ce7f9c5f93-kube-api-access-h2s92\") pod \"cinder-scheduler-0\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") " pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.926350 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.954668 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-scripts\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.954933 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.954954 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-config-data-custom\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " 
pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.954981 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-config-data\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.955022 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21eda578-0e80-48af-a2ce-5eb783748a04-etc-machine-id\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.955054 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbn9x\" (UniqueName: \"kubernetes.io/projected/21eda578-0e80-48af-a2ce-5eb783748a04-kube-api-access-hbn9x\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.955100 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21eda578-0e80-48af-a2ce-5eb783748a04-logs\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.955497 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21eda578-0e80-48af-a2ce-5eb783748a04-logs\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.957429 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/21eda578-0e80-48af-a2ce-5eb783748a04-etc-machine-id\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.967612 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.974464 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-config-data-custom\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.975796 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-scripts\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.979594 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-config-data\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:22 crc kubenswrapper[4930]: I1124 12:17:22.988166 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbn9x\" (UniqueName: \"kubernetes.io/projected/21eda578-0e80-48af-a2ce-5eb783748a04-kube-api-access-hbn9x\") pod \"cinder-api-0\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " pod="openstack/cinder-api-0" Nov 24 12:17:23 crc kubenswrapper[4930]: I1124 12:17:23.075976 4930 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:17:24 crc kubenswrapper[4930]: I1124 12:17:24.771255 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-785757c67f-sl8rq" Nov 24 12:17:24 crc kubenswrapper[4930]: I1124 12:17:24.854519 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:17:24 crc kubenswrapper[4930]: I1124 12:17:24.861976 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74dfb57686-drx5k" podUID="dc6e227e-e6f0-4244-80cf-67e85867d21f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": dial tcp 10.217.0.161:9311: connect: connection refused" Nov 24 12:17:24 crc kubenswrapper[4930]: I1124 12:17:24.863026 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74dfb57686-drx5k" podUID="dc6e227e-e6f0-4244-80cf-67e85867d21f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": dial tcp 10.217.0.161:9311: connect: connection refused" Nov 24 12:17:24 crc kubenswrapper[4930]: I1124 12:17:24.870351 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-575d598bfb-msnzv"] Nov 24 12:17:24 crc kubenswrapper[4930]: I1124 12:17:24.870586 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-575d598bfb-msnzv" podUID="54f78232-8dea-46dc-9fcd-b34fa6a4d400" containerName="neutron-api" containerID="cri-o://84dbd81a4e3885f82029e654f3d6f7085284e80b89db27ed085afe62cdfea73f" gracePeriod=30 Nov 24 12:17:24 crc kubenswrapper[4930]: I1124 12:17:24.871048 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-575d598bfb-msnzv" podUID="54f78232-8dea-46dc-9fcd-b34fa6a4d400" containerName="neutron-httpd" 
containerID="cri-o://f87588fcc1ea10945f088f6e47d7b8528bd4dd874d7d8c771e467e2377bee57a" gracePeriod=30 Nov 24 12:17:24 crc kubenswrapper[4930]: E1124 12:17:24.955057 4930 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddced9a55_e50d_4a84_8876_25e6981347f8.slice/crio-conmon-bee4a24e4de09e86532ea06bb6bc0154e5cec307ba7511473076aded1752631f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddced9a55_e50d_4a84_8876_25e6981347f8.slice/crio-c724a9494e3a0adb1b5041991cef78607bc76e53098c5791432371f0b8a38e73.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddced9a55_e50d_4a84_8876_25e6981347f8.slice/crio-conmon-c724a9494e3a0adb1b5041991cef78607bc76e53098c5791432371f0b8a38e73.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddced9a55_e50d_4a84_8876_25e6981347f8.slice/crio-bee4a24e4de09e86532ea06bb6bc0154e5cec307ba7511473076aded1752631f.scope\": RecentStats: unable to find data in memory cache]" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.011629 4930 generic.go:334] "Generic (PLEG): container finished" podID="dc6e227e-e6f0-4244-80cf-67e85867d21f" containerID="c5f426d1e1451e174d5b183a5d679db84119fe06fedbfffd340bb4a6006719fa" exitCode=0 Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.011686 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74dfb57686-drx5k" event={"ID":"dc6e227e-e6f0-4244-80cf-67e85867d21f","Type":"ContainerDied","Data":"c5f426d1e1451e174d5b183a5d679db84119fe06fedbfffd340bb4a6006719fa"} Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.013407 4930 generic.go:334] "Generic (PLEG): container finished" podID="dced9a55-e50d-4a84-8876-25e6981347f8" 
containerID="bee4a24e4de09e86532ea06bb6bc0154e5cec307ba7511473076aded1752631f" exitCode=137 Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.013427 4930 generic.go:334] "Generic (PLEG): container finished" podID="dced9a55-e50d-4a84-8876-25e6981347f8" containerID="c724a9494e3a0adb1b5041991cef78607bc76e53098c5791432371f0b8a38e73" exitCode=137 Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.013464 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6dcc7c9c-rcxf9" event={"ID":"dced9a55-e50d-4a84-8876-25e6981347f8","Type":"ContainerDied","Data":"bee4a24e4de09e86532ea06bb6bc0154e5cec307ba7511473076aded1752631f"} Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.013480 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6dcc7c9c-rcxf9" event={"ID":"dced9a55-e50d-4a84-8876-25e6981347f8","Type":"ContainerDied","Data":"c724a9494e3a0adb1b5041991cef78607bc76e53098c5791432371f0b8a38e73"} Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.015491 4930 generic.go:334] "Generic (PLEG): container finished" podID="cbd922a6-f938-478a-8db2-d99dc37f3a69" containerID="83e219941bf1309705d62b99197b40b292c80ca3cd3ed7de43869f46826a3910" exitCode=137 Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.015508 4930 generic.go:334] "Generic (PLEG): container finished" podID="cbd922a6-f938-478a-8db2-d99dc37f3a69" containerID="7a9872195563ac3837b27c862b5fc468d87289f1bf166b50477b2445fe494f1e" exitCode=137 Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.015550 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56d56cd8f5-hxqgp" event={"ID":"cbd922a6-f938-478a-8db2-d99dc37f3a69","Type":"ContainerDied","Data":"83e219941bf1309705d62b99197b40b292c80ca3cd3ed7de43869f46826a3910"} Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.015566 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56d56cd8f5-hxqgp" 
event={"ID":"cbd922a6-f938-478a-8db2-d99dc37f3a69","Type":"ContainerDied","Data":"7a9872195563ac3837b27c862b5fc468d87289f1bf166b50477b2445fe494f1e"} Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.019250 4930 generic.go:334] "Generic (PLEG): container finished" podID="9ecc66ca-44d6-4220-9f1c-2b054239f484" containerID="81839e0c0639b58d676fca72d2b02c94deeff5bd06adc2f682f8411e51fd2ca0" exitCode=137 Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.019280 4930 generic.go:334] "Generic (PLEG): container finished" podID="9ecc66ca-44d6-4220-9f1c-2b054239f484" containerID="95f3b7d02d3bc3c5dda4eb0b5d5bca10ac73478e4b03a3f2b832230e85f4141b" exitCode=137 Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.019302 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69679b8f55-8knvw" event={"ID":"9ecc66ca-44d6-4220-9f1c-2b054239f484","Type":"ContainerDied","Data":"81839e0c0639b58d676fca72d2b02c94deeff5bd06adc2f682f8411e51fd2ca0"} Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.019329 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69679b8f55-8knvw" event={"ID":"9ecc66ca-44d6-4220-9f1c-2b054239f484","Type":"ContainerDied","Data":"95f3b7d02d3bc3c5dda4eb0b5d5bca10ac73478e4b03a3f2b832230e85f4141b"} Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.271270 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74dfb57686-drx5k" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.416799 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnggp\" (UniqueName: \"kubernetes.io/projected/dc6e227e-e6f0-4244-80cf-67e85867d21f-kube-api-access-bnggp\") pod \"dc6e227e-e6f0-4244-80cf-67e85867d21f\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.417270 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-config-data\") pod \"dc6e227e-e6f0-4244-80cf-67e85867d21f\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.417407 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc6e227e-e6f0-4244-80cf-67e85867d21f-logs\") pod \"dc6e227e-e6f0-4244-80cf-67e85867d21f\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.417483 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-config-data-custom\") pod \"dc6e227e-e6f0-4244-80cf-67e85867d21f\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.417556 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-combined-ca-bundle\") pod \"dc6e227e-e6f0-4244-80cf-67e85867d21f\" (UID: \"dc6e227e-e6f0-4244-80cf-67e85867d21f\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.435055 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/dc6e227e-e6f0-4244-80cf-67e85867d21f-kube-api-access-bnggp" (OuterVolumeSpecName: "kube-api-access-bnggp") pod "dc6e227e-e6f0-4244-80cf-67e85867d21f" (UID: "dc6e227e-e6f0-4244-80cf-67e85867d21f"). InnerVolumeSpecName "kube-api-access-bnggp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.441009 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc6e227e-e6f0-4244-80cf-67e85867d21f-logs" (OuterVolumeSpecName: "logs") pod "dc6e227e-e6f0-4244-80cf-67e85867d21f" (UID: "dc6e227e-e6f0-4244-80cf-67e85867d21f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.463813 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dc6e227e-e6f0-4244-80cf-67e85867d21f" (UID: "dc6e227e-e6f0-4244-80cf-67e85867d21f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.488747 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc6e227e-e6f0-4244-80cf-67e85867d21f" (UID: "dc6e227e-e6f0-4244-80cf-67e85867d21f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.521796 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnggp\" (UniqueName: \"kubernetes.io/projected/dc6e227e-e6f0-4244-80cf-67e85867d21f-kube-api-access-bnggp\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.521831 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc6e227e-e6f0-4244-80cf-67e85867d21f-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.521842 4930 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.521850 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.531274 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.568506 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-config-data" (OuterVolumeSpecName: "config-data") pod "dc6e227e-e6f0-4244-80cf-67e85867d21f" (UID: "dc6e227e-e6f0-4244-80cf-67e85867d21f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.622791 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dced9a55-e50d-4a84-8876-25e6981347f8-config-data\") pod \"dced9a55-e50d-4a84-8876-25e6981347f8\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.622851 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5chn\" (UniqueName: \"kubernetes.io/projected/dced9a55-e50d-4a84-8876-25e6981347f8-kube-api-access-r5chn\") pod \"dced9a55-e50d-4a84-8876-25e6981347f8\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.622917 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dced9a55-e50d-4a84-8876-25e6981347f8-logs\") pod \"dced9a55-e50d-4a84-8876-25e6981347f8\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.623023 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dced9a55-e50d-4a84-8876-25e6981347f8-horizon-secret-key\") pod \"dced9a55-e50d-4a84-8876-25e6981347f8\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.623213 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dced9a55-e50d-4a84-8876-25e6981347f8-scripts\") pod \"dced9a55-e50d-4a84-8876-25e6981347f8\" (UID: \"dced9a55-e50d-4a84-8876-25e6981347f8\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.623527 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dced9a55-e50d-4a84-8876-25e6981347f8-logs" (OuterVolumeSpecName: "logs") pod "dced9a55-e50d-4a84-8876-25e6981347f8" (UID: "dced9a55-e50d-4a84-8876-25e6981347f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.623788 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dced9a55-e50d-4a84-8876-25e6981347f8-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.623814 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6e227e-e6f0-4244-80cf-67e85867d21f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.629869 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dced9a55-e50d-4a84-8876-25e6981347f8-kube-api-access-r5chn" (OuterVolumeSpecName: "kube-api-access-r5chn") pod "dced9a55-e50d-4a84-8876-25e6981347f8" (UID: "dced9a55-e50d-4a84-8876-25e6981347f8"). InnerVolumeSpecName "kube-api-access-r5chn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.630459 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dced9a55-e50d-4a84-8876-25e6981347f8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dced9a55-e50d-4a84-8876-25e6981347f8" (UID: "dced9a55-e50d-4a84-8876-25e6981347f8"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.656687 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dced9a55-e50d-4a84-8876-25e6981347f8-config-data" (OuterVolumeSpecName: "config-data") pod "dced9a55-e50d-4a84-8876-25e6981347f8" (UID: "dced9a55-e50d-4a84-8876-25e6981347f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.673288 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dced9a55-e50d-4a84-8876-25e6981347f8-scripts" (OuterVolumeSpecName: "scripts") pod "dced9a55-e50d-4a84-8876-25e6981347f8" (UID: "dced9a55-e50d-4a84-8876-25e6981347f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.725784 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dced9a55-e50d-4a84-8876-25e6981347f8-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.725822 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5chn\" (UniqueName: \"kubernetes.io/projected/dced9a55-e50d-4a84-8876-25e6981347f8-kube-api-access-r5chn\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.725834 4930 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dced9a55-e50d-4a84-8876-25e6981347f8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.725844 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dced9a55-e50d-4a84-8876-25e6981347f8-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc 
kubenswrapper[4930]: I1124 12:17:25.732405 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.826742 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ecc66ca-44d6-4220-9f1c-2b054239f484-logs\") pod \"9ecc66ca-44d6-4220-9f1c-2b054239f484\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.826810 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ecc66ca-44d6-4220-9f1c-2b054239f484-horizon-secret-key\") pod \"9ecc66ca-44d6-4220-9f1c-2b054239f484\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.826866 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rnp5\" (UniqueName: \"kubernetes.io/projected/9ecc66ca-44d6-4220-9f1c-2b054239f484-kube-api-access-7rnp5\") pod \"9ecc66ca-44d6-4220-9f1c-2b054239f484\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.826952 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ecc66ca-44d6-4220-9f1c-2b054239f484-scripts\") pod \"9ecc66ca-44d6-4220-9f1c-2b054239f484\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.827091 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ecc66ca-44d6-4220-9f1c-2b054239f484-config-data\") pod \"9ecc66ca-44d6-4220-9f1c-2b054239f484\" (UID: \"9ecc66ca-44d6-4220-9f1c-2b054239f484\") " Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.828129 4930 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ecc66ca-44d6-4220-9f1c-2b054239f484-logs" (OuterVolumeSpecName: "logs") pod "9ecc66ca-44d6-4220-9f1c-2b054239f484" (UID: "9ecc66ca-44d6-4220-9f1c-2b054239f484"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.831942 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ecc66ca-44d6-4220-9f1c-2b054239f484-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9ecc66ca-44d6-4220-9f1c-2b054239f484" (UID: "9ecc66ca-44d6-4220-9f1c-2b054239f484"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.833806 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ecc66ca-44d6-4220-9f1c-2b054239f484-kube-api-access-7rnp5" (OuterVolumeSpecName: "kube-api-access-7rnp5") pod "9ecc66ca-44d6-4220-9f1c-2b054239f484" (UID: "9ecc66ca-44d6-4220-9f1c-2b054239f484"). InnerVolumeSpecName "kube-api-access-7rnp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.851902 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ecc66ca-44d6-4220-9f1c-2b054239f484-scripts" (OuterVolumeSpecName: "scripts") pod "9ecc66ca-44d6-4220-9f1c-2b054239f484" (UID: "9ecc66ca-44d6-4220-9f1c-2b054239f484"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.859915 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ecc66ca-44d6-4220-9f1c-2b054239f484-config-data" (OuterVolumeSpecName: "config-data") pod "9ecc66ca-44d6-4220-9f1c-2b054239f484" (UID: "9ecc66ca-44d6-4220-9f1c-2b054239f484"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.931039 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ecc66ca-44d6-4220-9f1c-2b054239f484-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.931071 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ecc66ca-44d6-4220-9f1c-2b054239f484-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.931083 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ecc66ca-44d6-4220-9f1c-2b054239f484-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.931112 4930 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ecc66ca-44d6-4220-9f1c-2b054239f484-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.931123 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rnp5\" (UniqueName: \"kubernetes.io/projected/9ecc66ca-44d6-4220-9f1c-2b054239f484-kube-api-access-7rnp5\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:25 crc kubenswrapper[4930]: I1124 12:17:25.936873 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.034132 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m78rf\" (UniqueName: \"kubernetes.io/projected/cbd922a6-f938-478a-8db2-d99dc37f3a69-kube-api-access-m78rf\") pod \"cbd922a6-f938-478a-8db2-d99dc37f3a69\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.034189 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbd922a6-f938-478a-8db2-d99dc37f3a69-logs\") pod \"cbd922a6-f938-478a-8db2-d99dc37f3a69\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.034329 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cbd922a6-f938-478a-8db2-d99dc37f3a69-config-data\") pod \"cbd922a6-f938-478a-8db2-d99dc37f3a69\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.034376 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cbd922a6-f938-478a-8db2-d99dc37f3a69-horizon-secret-key\") pod \"cbd922a6-f938-478a-8db2-d99dc37f3a69\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.034398 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbd922a6-f938-478a-8db2-d99dc37f3a69-scripts\") pod \"cbd922a6-f938-478a-8db2-d99dc37f3a69\" (UID: \"cbd922a6-f938-478a-8db2-d99dc37f3a69\") " Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.034928 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cbd922a6-f938-478a-8db2-d99dc37f3a69-logs" (OuterVolumeSpecName: "logs") pod "cbd922a6-f938-478a-8db2-d99dc37f3a69" (UID: "cbd922a6-f938-478a-8db2-d99dc37f3a69"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.058700 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd922a6-f938-478a-8db2-d99dc37f3a69-kube-api-access-m78rf" (OuterVolumeSpecName: "kube-api-access-m78rf") pod "cbd922a6-f938-478a-8db2-d99dc37f3a69" (UID: "cbd922a6-f938-478a-8db2-d99dc37f3a69"). InnerVolumeSpecName "kube-api-access-m78rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.060674 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd922a6-f938-478a-8db2-d99dc37f3a69-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cbd922a6-f938-478a-8db2-d99dc37f3a69" (UID: "cbd922a6-f938-478a-8db2-d99dc37f3a69"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.062000 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6dcc7c9c-rcxf9" event={"ID":"dced9a55-e50d-4a84-8876-25e6981347f8","Type":"ContainerDied","Data":"b27b34f1f43487985c502542542042a1344e660e413b3f934cb76b50ac3f0330"} Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.062061 4930 scope.go:117] "RemoveContainer" containerID="bee4a24e4de09e86532ea06bb6bc0154e5cec307ba7511473076aded1752631f" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.062203 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b6dcc7c9c-rcxf9" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.080132 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbd922a6-f938-478a-8db2-d99dc37f3a69-scripts" (OuterVolumeSpecName: "scripts") pod "cbd922a6-f938-478a-8db2-d99dc37f3a69" (UID: "cbd922a6-f938-478a-8db2-d99dc37f3a69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.090150 4930 generic.go:334] "Generic (PLEG): container finished" podID="54f78232-8dea-46dc-9fcd-b34fa6a4d400" containerID="f87588fcc1ea10945f088f6e47d7b8528bd4dd874d7d8c771e467e2377bee57a" exitCode=0 Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.132715 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="ceilometer-central-agent" containerID="cri-o://1bb57ec1407cd77090369ffa5162d096f0f971c81613b78b630c8e1f554d5cfb" gracePeriod=30 Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.133119 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="proxy-httpd" containerID="cri-o://df9bf95788fd892c27464b524bf98e3fcbf81b84ecbd807f20a1fd20356aac40" gracePeriod=30 Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.133169 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="sg-core" containerID="cri-o://b7e80d81acb3786253db05cc57e95eb06d59a40fc51ae12a1de6cd67f028f3e3" gracePeriod=30 Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.133201 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" 
containerName="ceilometer-notification-agent" containerID="cri-o://14eeb39c18d0d6800e9af63273d2543b15cb4bb5750f9ee1005951b1bd290225" gracePeriod=30 Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.137822 4930 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cbd922a6-f938-478a-8db2-d99dc37f3a69-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.137847 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbd922a6-f938-478a-8db2-d99dc37f3a69-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.137860 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m78rf\" (UniqueName: \"kubernetes.io/projected/cbd922a6-f938-478a-8db2-d99dc37f3a69-kube-api-access-m78rf\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.137872 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbd922a6-f938-478a-8db2-d99dc37f3a69-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.156355 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56d56cd8f5-hxqgp" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.168908 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbd922a6-f938-478a-8db2-d99dc37f3a69-config-data" (OuterVolumeSpecName: "config-data") pod "cbd922a6-f938-478a-8db2-d99dc37f3a69" (UID: "cbd922a6-f938-478a-8db2-d99dc37f3a69"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.175253 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.175284 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575d598bfb-msnzv" event={"ID":"54f78232-8dea-46dc-9fcd-b34fa6a4d400","Type":"ContainerDied","Data":"f87588fcc1ea10945f088f6e47d7b8528bd4dd874d7d8c771e467e2377bee57a"} Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.175304 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9","Type":"ContainerStarted","Data":"df9bf95788fd892c27464b524bf98e3fcbf81b84ecbd807f20a1fd20356aac40"} Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.175315 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56d56cd8f5-hxqgp" event={"ID":"cbd922a6-f938-478a-8db2-d99dc37f3a69","Type":"ContainerDied","Data":"888b4987be8aebaeb2beffcba12a085a3d5413a7aff327992c12b22dfd451c8d"} Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.187178 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69679b8f55-8knvw" event={"ID":"9ecc66ca-44d6-4220-9f1c-2b054239f484","Type":"ContainerDied","Data":"a8cd585f53347b746b8db09b2655451576e6a0b42d8c497541831750a492d509"} Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.187281 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69679b8f55-8knvw" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.249472 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74dfb57686-drx5k" event={"ID":"dc6e227e-e6f0-4244-80cf-67e85867d21f","Type":"ContainerDied","Data":"ae29de56d969ed151bb8105d6a40693f240ad53a2a94ca03f1354480e0b046ab"} Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.249888 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74dfb57686-drx5k" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.260450 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.289738 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cbd922a6-f938-478a-8db2-d99dc37f3a69-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.290177 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.382735754 podStartE2EDuration="1m4.290146642s" podCreationTimestamp="2025-11-24 12:16:22 +0000 UTC" firstStartedPulling="2025-11-24 12:16:25.334047893 +0000 UTC m=+1031.948375843" lastFinishedPulling="2025-11-24 12:17:25.241458771 +0000 UTC m=+1091.855786731" observedRunningTime="2025-11-24 12:17:26.183043952 +0000 UTC m=+1092.797371892" watchObservedRunningTime="2025-11-24 12:17:26.290146642 +0000 UTC m=+1092.904474592" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.313004 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.330288 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b6dcc7c9c-rcxf9"] Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.338532 4930 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-cjwcc"] Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.348605 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b6dcc7c9c-rcxf9"] Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.352270 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69679b8f55-8knvw"] Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.360605 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69679b8f55-8knvw"] Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.371925 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74dfb57686-drx5k"] Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.379278 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-74dfb57686-drx5k"] Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.388463 4930 scope.go:117] "RemoveContainer" containerID="c724a9494e3a0adb1b5041991cef78607bc76e53098c5791432371f0b8a38e73" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.415439 4930 scope.go:117] "RemoveContainer" containerID="83e219941bf1309705d62b99197b40b292c80ca3cd3ed7de43869f46826a3910" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.493246 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56d56cd8f5-hxqgp"] Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.499779 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56d56cd8f5-hxqgp"] Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.602746 4930 scope.go:117] "RemoveContainer" containerID="7a9872195563ac3837b27c862b5fc468d87289f1bf166b50477b2445fe494f1e" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.666671 4930 scope.go:117] "RemoveContainer" containerID="81839e0c0639b58d676fca72d2b02c94deeff5bd06adc2f682f8411e51fd2ca0" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.873153 4930 
scope.go:117] "RemoveContainer" containerID="95f3b7d02d3bc3c5dda4eb0b5d5bca10ac73478e4b03a3f2b832230e85f4141b" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.944388 4930 scope.go:117] "RemoveContainer" containerID="c5f426d1e1451e174d5b183a5d679db84119fe06fedbfffd340bb4a6006719fa" Nov 24 12:17:26 crc kubenswrapper[4930]: I1124 12:17:26.972667 4930 scope.go:117] "RemoveContainer" containerID="c8307f76e6b5ff9c0312c956e3bf480236593b6f373421abb7ecb13e9864d901" Nov 24 12:17:27 crc kubenswrapper[4930]: I1124 12:17:27.262356 4930 generic.go:334] "Generic (PLEG): container finished" podID="77b75b6e-7ded-4307-8e62-b15ff18acffe" containerID="1a5f89f62e5f3d75aad6dcc3396a389246bdf3560cf8da0b3a75d9bf19059856" exitCode=0 Nov 24 12:17:27 crc kubenswrapper[4930]: I1124 12:17:27.262419 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-cjwcc" event={"ID":"77b75b6e-7ded-4307-8e62-b15ff18acffe","Type":"ContainerDied","Data":"1a5f89f62e5f3d75aad6dcc3396a389246bdf3560cf8da0b3a75d9bf19059856"} Nov 24 12:17:27 crc kubenswrapper[4930]: I1124 12:17:27.262441 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-cjwcc" event={"ID":"77b75b6e-7ded-4307-8e62-b15ff18acffe","Type":"ContainerStarted","Data":"694c66f04102d012f6397a3f6dcec2beec05223c0684690a8d6c15d5edb9e8cc"} Nov 24 12:17:27 crc kubenswrapper[4930]: I1124 12:17:27.264415 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"21eda578-0e80-48af-a2ce-5eb783748a04","Type":"ContainerStarted","Data":"41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb"} Nov 24 12:17:27 crc kubenswrapper[4930]: I1124 12:17:27.264446 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"21eda578-0e80-48af-a2ce-5eb783748a04","Type":"ContainerStarted","Data":"b46c2e26f243c2470e4086c0d57d02d64fced671ea7408c417e0c7dfa4ec956c"} Nov 24 12:17:27 crc kubenswrapper[4930]: I1124 
12:17:27.270351 4930 generic.go:334] "Generic (PLEG): container finished" podID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerID="df9bf95788fd892c27464b524bf98e3fcbf81b84ecbd807f20a1fd20356aac40" exitCode=0 Nov 24 12:17:27 crc kubenswrapper[4930]: I1124 12:17:27.270378 4930 generic.go:334] "Generic (PLEG): container finished" podID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerID="b7e80d81acb3786253db05cc57e95eb06d59a40fc51ae12a1de6cd67f028f3e3" exitCode=2 Nov 24 12:17:27 crc kubenswrapper[4930]: I1124 12:17:27.270388 4930 generic.go:334] "Generic (PLEG): container finished" podID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerID="1bb57ec1407cd77090369ffa5162d096f0f971c81613b78b630c8e1f554d5cfb" exitCode=0 Nov 24 12:17:27 crc kubenswrapper[4930]: I1124 12:17:27.270389 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9","Type":"ContainerDied","Data":"df9bf95788fd892c27464b524bf98e3fcbf81b84ecbd807f20a1fd20356aac40"} Nov 24 12:17:27 crc kubenswrapper[4930]: I1124 12:17:27.270424 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9","Type":"ContainerDied","Data":"b7e80d81acb3786253db05cc57e95eb06d59a40fc51ae12a1de6cd67f028f3e3"} Nov 24 12:17:27 crc kubenswrapper[4930]: I1124 12:17:27.270437 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9","Type":"ContainerDied","Data":"1bb57ec1407cd77090369ffa5162d096f0f971c81613b78b630c8e1f554d5cfb"} Nov 24 12:17:27 crc kubenswrapper[4930]: I1124 12:17:27.271404 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66691740-aef1-4155-b12a-a7ce7f9c5f93","Type":"ContainerStarted","Data":"0f04be63e020e0810f97362a5d82353bc12dce26603d979e979fa0362853bfa4"} Nov 24 12:17:27 crc kubenswrapper[4930]: I1124 12:17:27.948568 4930 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.024992 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-log-httpd\") pod \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.025147 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-config-data\") pod \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.025227 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ww8l\" (UniqueName: \"kubernetes.io/projected/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-kube-api-access-8ww8l\") pod \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.025304 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-scripts\") pod \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.025473 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-combined-ca-bundle\") pod \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.025499 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-run-httpd\") pod \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.025559 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-sg-core-conf-yaml\") pod \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\" (UID: \"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9\") " Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.025589 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" (UID: "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.026068 4930 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.026321 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" (UID: "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.029793 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-kube-api-access-8ww8l" (OuterVolumeSpecName: "kube-api-access-8ww8l") pod "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" (UID: "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9"). InnerVolumeSpecName "kube-api-access-8ww8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.030890 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-scripts" (OuterVolumeSpecName: "scripts") pod "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" (UID: "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.096409 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" (UID: "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.096447 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ecc66ca-44d6-4220-9f1c-2b054239f484" path="/var/lib/kubelet/pods/9ecc66ca-44d6-4220-9f1c-2b054239f484/volumes" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.097215 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd922a6-f938-478a-8db2-d99dc37f3a69" path="/var/lib/kubelet/pods/cbd922a6-f938-478a-8db2-d99dc37f3a69/volumes" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.097883 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc6e227e-e6f0-4244-80cf-67e85867d21f" path="/var/lib/kubelet/pods/dc6e227e-e6f0-4244-80cf-67e85867d21f/volumes" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.099083 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dced9a55-e50d-4a84-8876-25e6981347f8" path="/var/lib/kubelet/pods/dced9a55-e50d-4a84-8876-25e6981347f8/volumes" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.125261 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" (UID: "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.127492 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.127525 4930 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.127548 4930 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.127568 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ww8l\" (UniqueName: \"kubernetes.io/projected/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-kube-api-access-8ww8l\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.127580 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.129105 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-config-data" (OuterVolumeSpecName: "config-data") pod "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" (UID: "b694a6e6-54b3-4d0e-b80c-d05395c3e3b9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.229496 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.283722 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66691740-aef1-4155-b12a-a7ce7f9c5f93","Type":"ContainerStarted","Data":"4b64a0fa45a98c56ded17ef620e0b17f14cb0c8ee949d9f5b426eccf1239ad31"} Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.287384 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-cjwcc" event={"ID":"77b75b6e-7ded-4307-8e62-b15ff18acffe","Type":"ContainerStarted","Data":"7ed6802db67ed4630c5d4a52cc5bcd91065f68bcf837bad5a238b1e052263046"} Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.287509 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.289423 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"21eda578-0e80-48af-a2ce-5eb783748a04","Type":"ContainerStarted","Data":"1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33"} Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.289584 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="21eda578-0e80-48af-a2ce-5eb783748a04" containerName="cinder-api-log" containerID="cri-o://41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb" gracePeriod=30 Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.289702 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.289755 4930 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="21eda578-0e80-48af-a2ce-5eb783748a04" containerName="cinder-api" containerID="cri-o://1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33" gracePeriod=30 Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.293455 4930 generic.go:334] "Generic (PLEG): container finished" podID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerID="14eeb39c18d0d6800e9af63273d2543b15cb4bb5750f9ee1005951b1bd290225" exitCode=0 Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.293907 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.293920 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9","Type":"ContainerDied","Data":"14eeb39c18d0d6800e9af63273d2543b15cb4bb5750f9ee1005951b1bd290225"} Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.294363 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b694a6e6-54b3-4d0e-b80c-d05395c3e3b9","Type":"ContainerDied","Data":"c908d64474eb25486fdefb2cbd39d7d875305ce8740668855e34de9a11fa9270"} Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.294388 4930 scope.go:117] "RemoveContainer" containerID="df9bf95788fd892c27464b524bf98e3fcbf81b84ecbd807f20a1fd20356aac40" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.314314 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-797bbc649-cjwcc" podStartSLOduration=6.314295717 podStartE2EDuration="6.314295717s" podCreationTimestamp="2025-11-24 12:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:28.311128286 +0000 UTC m=+1094.925456236" watchObservedRunningTime="2025-11-24 
12:17:28.314295717 +0000 UTC m=+1094.928623667" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.339704 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.339679917 podStartE2EDuration="6.339679917s" podCreationTimestamp="2025-11-24 12:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:28.33076297 +0000 UTC m=+1094.945090940" watchObservedRunningTime="2025-11-24 12:17:28.339679917 +0000 UTC m=+1094.954007877" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.354167 4930 scope.go:117] "RemoveContainer" containerID="b7e80d81acb3786253db05cc57e95eb06d59a40fc51ae12a1de6cd67f028f3e3" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.360629 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.383347 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.395651 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.395994 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd922a6-f938-478a-8db2-d99dc37f3a69" containerName="horizon" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396008 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd922a6-f938-478a-8db2-d99dc37f3a69" containerName="horizon" Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.396022 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6e227e-e6f0-4244-80cf-67e85867d21f" containerName="barbican-api" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396028 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6e227e-e6f0-4244-80cf-67e85867d21f" containerName="barbican-api" Nov 24 
12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.396039 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecc66ca-44d6-4220-9f1c-2b054239f484" containerName="horizon-log" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396045 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecc66ca-44d6-4220-9f1c-2b054239f484" containerName="horizon-log" Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.396060 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="ceilometer-central-agent" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396065 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="ceilometer-central-agent" Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.396078 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dced9a55-e50d-4a84-8876-25e6981347f8" containerName="horizon-log" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396083 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="dced9a55-e50d-4a84-8876-25e6981347f8" containerName="horizon-log" Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.396096 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecc66ca-44d6-4220-9f1c-2b054239f484" containerName="horizon" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396102 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecc66ca-44d6-4220-9f1c-2b054239f484" containerName="horizon" Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.396112 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="ceilometer-notification-agent" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396117 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="ceilometer-notification-agent" Nov 24 
12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.396129 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="sg-core" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396135 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="sg-core" Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.396145 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dced9a55-e50d-4a84-8876-25e6981347f8" containerName="horizon" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396150 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="dced9a55-e50d-4a84-8876-25e6981347f8" containerName="horizon" Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.396159 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd922a6-f938-478a-8db2-d99dc37f3a69" containerName="horizon-log" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396164 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd922a6-f938-478a-8db2-d99dc37f3a69" containerName="horizon-log" Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.396172 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6e227e-e6f0-4244-80cf-67e85867d21f" containerName="barbican-api-log" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396177 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6e227e-e6f0-4244-80cf-67e85867d21f" containerName="barbican-api-log" Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.396191 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="proxy-httpd" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396196 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="proxy-httpd" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396348 4930 
memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6e227e-e6f0-4244-80cf-67e85867d21f" containerName="barbican-api-log" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396361 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="proxy-httpd" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396374 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecc66ca-44d6-4220-9f1c-2b054239f484" containerName="horizon-log" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396382 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecc66ca-44d6-4220-9f1c-2b054239f484" containerName="horizon" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396388 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="dced9a55-e50d-4a84-8876-25e6981347f8" containerName="horizon" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396396 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="dced9a55-e50d-4a84-8876-25e6981347f8" containerName="horizon-log" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396406 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd922a6-f938-478a-8db2-d99dc37f3a69" containerName="horizon-log" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396415 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6e227e-e6f0-4244-80cf-67e85867d21f" containerName="barbican-api" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396431 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="ceilometer-central-agent" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396436 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="sg-core" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396446 4930 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" containerName="ceilometer-notification-agent" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.396454 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd922a6-f938-478a-8db2-d99dc37f3a69" containerName="horizon" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.404082 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.407422 4930 scope.go:117] "RemoveContainer" containerID="14eeb39c18d0d6800e9af63273d2543b15cb4bb5750f9ee1005951b1bd290225" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.407910 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.408191 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.418855 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.432475 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsg4p\" (UniqueName: \"kubernetes.io/projected/d53100f9-6ba2-48da-9836-f05692e91a3b-kube-api-access-jsg4p\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.432549 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-config-data\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.432574 4930 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.432616 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d53100f9-6ba2-48da-9836-f05692e91a3b-run-httpd\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.432644 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-scripts\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.432685 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d53100f9-6ba2-48da-9836-f05692e91a3b-log-httpd\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.432703 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.455901 4930 scope.go:117] "RemoveContainer" containerID="1bb57ec1407cd77090369ffa5162d096f0f971c81613b78b630c8e1f554d5cfb" Nov 24 12:17:28 crc 
kubenswrapper[4930]: I1124 12:17:28.485161 4930 scope.go:117] "RemoveContainer" containerID="df9bf95788fd892c27464b524bf98e3fcbf81b84ecbd807f20a1fd20356aac40" Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.485950 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df9bf95788fd892c27464b524bf98e3fcbf81b84ecbd807f20a1fd20356aac40\": container with ID starting with df9bf95788fd892c27464b524bf98e3fcbf81b84ecbd807f20a1fd20356aac40 not found: ID does not exist" containerID="df9bf95788fd892c27464b524bf98e3fcbf81b84ecbd807f20a1fd20356aac40" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.486011 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9bf95788fd892c27464b524bf98e3fcbf81b84ecbd807f20a1fd20356aac40"} err="failed to get container status \"df9bf95788fd892c27464b524bf98e3fcbf81b84ecbd807f20a1fd20356aac40\": rpc error: code = NotFound desc = could not find container \"df9bf95788fd892c27464b524bf98e3fcbf81b84ecbd807f20a1fd20356aac40\": container with ID starting with df9bf95788fd892c27464b524bf98e3fcbf81b84ecbd807f20a1fd20356aac40 not found: ID does not exist" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.486044 4930 scope.go:117] "RemoveContainer" containerID="b7e80d81acb3786253db05cc57e95eb06d59a40fc51ae12a1de6cd67f028f3e3" Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.486591 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7e80d81acb3786253db05cc57e95eb06d59a40fc51ae12a1de6cd67f028f3e3\": container with ID starting with b7e80d81acb3786253db05cc57e95eb06d59a40fc51ae12a1de6cd67f028f3e3 not found: ID does not exist" containerID="b7e80d81acb3786253db05cc57e95eb06d59a40fc51ae12a1de6cd67f028f3e3" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.486625 4930 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b7e80d81acb3786253db05cc57e95eb06d59a40fc51ae12a1de6cd67f028f3e3"} err="failed to get container status \"b7e80d81acb3786253db05cc57e95eb06d59a40fc51ae12a1de6cd67f028f3e3\": rpc error: code = NotFound desc = could not find container \"b7e80d81acb3786253db05cc57e95eb06d59a40fc51ae12a1de6cd67f028f3e3\": container with ID starting with b7e80d81acb3786253db05cc57e95eb06d59a40fc51ae12a1de6cd67f028f3e3 not found: ID does not exist" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.486643 4930 scope.go:117] "RemoveContainer" containerID="14eeb39c18d0d6800e9af63273d2543b15cb4bb5750f9ee1005951b1bd290225" Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.486963 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14eeb39c18d0d6800e9af63273d2543b15cb4bb5750f9ee1005951b1bd290225\": container with ID starting with 14eeb39c18d0d6800e9af63273d2543b15cb4bb5750f9ee1005951b1bd290225 not found: ID does not exist" containerID="14eeb39c18d0d6800e9af63273d2543b15cb4bb5750f9ee1005951b1bd290225" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.486993 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14eeb39c18d0d6800e9af63273d2543b15cb4bb5750f9ee1005951b1bd290225"} err="failed to get container status \"14eeb39c18d0d6800e9af63273d2543b15cb4bb5750f9ee1005951b1bd290225\": rpc error: code = NotFound desc = could not find container \"14eeb39c18d0d6800e9af63273d2543b15cb4bb5750f9ee1005951b1bd290225\": container with ID starting with 14eeb39c18d0d6800e9af63273d2543b15cb4bb5750f9ee1005951b1bd290225 not found: ID does not exist" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.487009 4930 scope.go:117] "RemoveContainer" containerID="1bb57ec1407cd77090369ffa5162d096f0f971c81613b78b630c8e1f554d5cfb" Nov 24 12:17:28 crc kubenswrapper[4930]: E1124 12:17:28.487236 4930 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1bb57ec1407cd77090369ffa5162d096f0f971c81613b78b630c8e1f554d5cfb\": container with ID starting with 1bb57ec1407cd77090369ffa5162d096f0f971c81613b78b630c8e1f554d5cfb not found: ID does not exist" containerID="1bb57ec1407cd77090369ffa5162d096f0f971c81613b78b630c8e1f554d5cfb" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.487261 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb57ec1407cd77090369ffa5162d096f0f971c81613b78b630c8e1f554d5cfb"} err="failed to get container status \"1bb57ec1407cd77090369ffa5162d096f0f971c81613b78b630c8e1f554d5cfb\": rpc error: code = NotFound desc = could not find container \"1bb57ec1407cd77090369ffa5162d096f0f971c81613b78b630c8e1f554d5cfb\": container with ID starting with 1bb57ec1407cd77090369ffa5162d096f0f971c81613b78b630c8e1f554d5cfb not found: ID does not exist" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.534834 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-config-data\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.534889 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.534938 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d53100f9-6ba2-48da-9836-f05692e91a3b-run-httpd\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 
24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.534965 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-scripts\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.535010 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d53100f9-6ba2-48da-9836-f05692e91a3b-log-httpd\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.535026 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.535088 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsg4p\" (UniqueName: \"kubernetes.io/projected/d53100f9-6ba2-48da-9836-f05692e91a3b-kube-api-access-jsg4p\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.536437 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d53100f9-6ba2-48da-9836-f05692e91a3b-log-httpd\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.537012 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d53100f9-6ba2-48da-9836-f05692e91a3b-run-httpd\") pod 
\"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.541286 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.541508 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.541584 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-scripts\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.541832 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-config-data\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.553184 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsg4p\" (UniqueName: \"kubernetes.io/projected/d53100f9-6ba2-48da-9836-f05692e91a3b-kube-api-access-jsg4p\") pod \"ceilometer-0\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.744455 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:28 crc kubenswrapper[4930]: I1124 12:17:28.974134 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.053498 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbn9x\" (UniqueName: \"kubernetes.io/projected/21eda578-0e80-48af-a2ce-5eb783748a04-kube-api-access-hbn9x\") pod \"21eda578-0e80-48af-a2ce-5eb783748a04\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.053716 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21eda578-0e80-48af-a2ce-5eb783748a04-etc-machine-id\") pod \"21eda578-0e80-48af-a2ce-5eb783748a04\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.053773 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21eda578-0e80-48af-a2ce-5eb783748a04-logs\") pod \"21eda578-0e80-48af-a2ce-5eb783748a04\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.053900 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-scripts\") pod \"21eda578-0e80-48af-a2ce-5eb783748a04\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.054044 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-config-data-custom\") pod \"21eda578-0e80-48af-a2ce-5eb783748a04\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " Nov 24 12:17:29 crc 
kubenswrapper[4930]: I1124 12:17:29.054091 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-combined-ca-bundle\") pod \"21eda578-0e80-48af-a2ce-5eb783748a04\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.054116 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-config-data\") pod \"21eda578-0e80-48af-a2ce-5eb783748a04\" (UID: \"21eda578-0e80-48af-a2ce-5eb783748a04\") " Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.055656 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21eda578-0e80-48af-a2ce-5eb783748a04-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "21eda578-0e80-48af-a2ce-5eb783748a04" (UID: "21eda578-0e80-48af-a2ce-5eb783748a04"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.055973 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21eda578-0e80-48af-a2ce-5eb783748a04-logs" (OuterVolumeSpecName: "logs") pod "21eda578-0e80-48af-a2ce-5eb783748a04" (UID: "21eda578-0e80-48af-a2ce-5eb783748a04"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.061037 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21eda578-0e80-48af-a2ce-5eb783748a04-kube-api-access-hbn9x" (OuterVolumeSpecName: "kube-api-access-hbn9x") pod "21eda578-0e80-48af-a2ce-5eb783748a04" (UID: "21eda578-0e80-48af-a2ce-5eb783748a04"). InnerVolumeSpecName "kube-api-access-hbn9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.062605 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-scripts" (OuterVolumeSpecName: "scripts") pod "21eda578-0e80-48af-a2ce-5eb783748a04" (UID: "21eda578-0e80-48af-a2ce-5eb783748a04"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.081753 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "21eda578-0e80-48af-a2ce-5eb783748a04" (UID: "21eda578-0e80-48af-a2ce-5eb783748a04"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.085929 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21eda578-0e80-48af-a2ce-5eb783748a04" (UID: "21eda578-0e80-48af-a2ce-5eb783748a04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.102277 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-config-data" (OuterVolumeSpecName: "config-data") pod "21eda578-0e80-48af-a2ce-5eb783748a04" (UID: "21eda578-0e80-48af-a2ce-5eb783748a04"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.156449 4930 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.156488 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.156498 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.156506 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbn9x\" (UniqueName: \"kubernetes.io/projected/21eda578-0e80-48af-a2ce-5eb783748a04-kube-api-access-hbn9x\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.156516 4930 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21eda578-0e80-48af-a2ce-5eb783748a04-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.156524 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21eda578-0e80-48af-a2ce-5eb783748a04-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.156546 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21eda578-0e80-48af-a2ce-5eb783748a04-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.195844 4930 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.305901 4930 generic.go:334] "Generic (PLEG): container finished" podID="21eda578-0e80-48af-a2ce-5eb783748a04" containerID="1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33" exitCode=0 Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.305938 4930 generic.go:334] "Generic (PLEG): container finished" podID="21eda578-0e80-48af-a2ce-5eb783748a04" containerID="41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb" exitCode=143 Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.305949 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.305990 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"21eda578-0e80-48af-a2ce-5eb783748a04","Type":"ContainerDied","Data":"1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33"} Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.306018 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"21eda578-0e80-48af-a2ce-5eb783748a04","Type":"ContainerDied","Data":"41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb"} Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.306030 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"21eda578-0e80-48af-a2ce-5eb783748a04","Type":"ContainerDied","Data":"b46c2e26f243c2470e4086c0d57d02d64fced671ea7408c417e0c7dfa4ec956c"} Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.306046 4930 scope.go:117] "RemoveContainer" containerID="1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.309507 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d53100f9-6ba2-48da-9836-f05692e91a3b","Type":"ContainerStarted","Data":"55c0e205a3abcaf28e775c1e8003b709ab7d35ceb15bd741c62f3b142bb309f7"} Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.312386 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66691740-aef1-4155-b12a-a7ce7f9c5f93","Type":"ContainerStarted","Data":"9c58db8a44be4232eb42974f9aec757b82505d784396c5ba789585e8033f9acc"} Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.340851 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.224535036 podStartE2EDuration="7.340833381s" podCreationTimestamp="2025-11-24 12:17:22 +0000 UTC" firstStartedPulling="2025-11-24 12:17:26.415430175 +0000 UTC m=+1093.029758125" lastFinishedPulling="2025-11-24 12:17:27.53172852 +0000 UTC m=+1094.146056470" observedRunningTime="2025-11-24 12:17:29.337619678 +0000 UTC m=+1095.951947648" watchObservedRunningTime="2025-11-24 12:17:29.340833381 +0000 UTC m=+1095.955161331" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.344385 4930 scope.go:117] "RemoveContainer" containerID="41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.361678 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.377972 4930 scope.go:117] "RemoveContainer" containerID="1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33" Nov 24 12:17:29 crc kubenswrapper[4930]: E1124 12:17:29.378522 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33\": container with ID starting with 1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33 not found: ID does not exist" 
containerID="1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.378620 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33"} err="failed to get container status \"1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33\": rpc error: code = NotFound desc = could not find container \"1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33\": container with ID starting with 1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33 not found: ID does not exist" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.378646 4930 scope.go:117] "RemoveContainer" containerID="41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb" Nov 24 12:17:29 crc kubenswrapper[4930]: E1124 12:17:29.379024 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb\": container with ID starting with 41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb not found: ID does not exist" containerID="41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.379058 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb"} err="failed to get container status \"41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb\": rpc error: code = NotFound desc = could not find container \"41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb\": container with ID starting with 41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb not found: ID does not exist" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.379081 4930 scope.go:117] 
"RemoveContainer" containerID="1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.379412 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33"} err="failed to get container status \"1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33\": rpc error: code = NotFound desc = could not find container \"1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33\": container with ID starting with 1759c4c7a1cd47fea4904411bba8ef38db631550ec76014a6a15d88a3bdead33 not found: ID does not exist" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.379459 4930 scope.go:117] "RemoveContainer" containerID="41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.379826 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb"} err="failed to get container status \"41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb\": rpc error: code = NotFound desc = could not find container \"41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb\": container with ID starting with 41efe818f3d8adede03ceb12e012edbbf40e3b4cdf26857a0a06814e0bea40fb not found: ID does not exist" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.390681 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.398430 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:17:29 crc kubenswrapper[4930]: E1124 12:17:29.398857 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21eda578-0e80-48af-a2ce-5eb783748a04" containerName="cinder-api" Nov 24 12:17:29 crc kubenswrapper[4930]: 
I1124 12:17:29.398884 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="21eda578-0e80-48af-a2ce-5eb783748a04" containerName="cinder-api" Nov 24 12:17:29 crc kubenswrapper[4930]: E1124 12:17:29.398904 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21eda578-0e80-48af-a2ce-5eb783748a04" containerName="cinder-api-log" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.398912 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="21eda578-0e80-48af-a2ce-5eb783748a04" containerName="cinder-api-log" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.399193 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="21eda578-0e80-48af-a2ce-5eb783748a04" containerName="cinder-api-log" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.399223 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="21eda578-0e80-48af-a2ce-5eb783748a04" containerName="cinder-api" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.400413 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.405605 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.405641 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.405710 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.407661 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.461382 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.461463 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-config-data\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.461492 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.461634 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/a527a579-00ed-4438-b675-70c5baefb0d9-logs\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.461736 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a527a579-00ed-4438-b675-70c5baefb0d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.461774 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.461884 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c26zj\" (UniqueName: \"kubernetes.io/projected/a527a579-00ed-4438-b675-70c5baefb0d9-kube-api-access-c26zj\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.461958 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.462023 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-scripts\") pod 
\"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.563658 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a527a579-00ed-4438-b675-70c5baefb0d9-logs\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.563736 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a527a579-00ed-4438-b675-70c5baefb0d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.563769 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.563823 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c26zj\" (UniqueName: \"kubernetes.io/projected/a527a579-00ed-4438-b675-70c5baefb0d9-kube-api-access-c26zj\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.563840 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a527a579-00ed-4438-b675-70c5baefb0d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.563865 4930 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.563921 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-scripts\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.563972 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.564010 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-config-data\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.564036 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.564807 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a527a579-00ed-4438-b675-70c5baefb0d9-logs\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc 
kubenswrapper[4930]: I1124 12:17:29.567196 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.567473 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.569482 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.570233 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-scripts\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.570401 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.572293 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a527a579-00ed-4438-b675-70c5baefb0d9-config-data\") pod \"cinder-api-0\" (UID: 
\"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.584924 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c26zj\" (UniqueName: \"kubernetes.io/projected/a527a579-00ed-4438-b675-70c5baefb0d9-kube-api-access-c26zj\") pod \"cinder-api-0\" (UID: \"a527a579-00ed-4438-b675-70c5baefb0d9\") " pod="openstack/cinder-api-0" Nov 24 12:17:29 crc kubenswrapper[4930]: I1124 12:17:29.716119 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:17:30 crc kubenswrapper[4930]: I1124 12:17:30.095590 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21eda578-0e80-48af-a2ce-5eb783748a04" path="/var/lib/kubelet/pods/21eda578-0e80-48af-a2ce-5eb783748a04/volumes" Nov 24 12:17:30 crc kubenswrapper[4930]: I1124 12:17:30.096915 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b694a6e6-54b3-4d0e-b80c-d05395c3e3b9" path="/var/lib/kubelet/pods/b694a6e6-54b3-4d0e-b80c-d05395c3e3b9/volumes" Nov 24 12:17:30 crc kubenswrapper[4930]: I1124 12:17:30.203292 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:17:30 crc kubenswrapper[4930]: I1124 12:17:30.324196 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a527a579-00ed-4438-b675-70c5baefb0d9","Type":"ContainerStarted","Data":"1895602bb7867a20b1b0d3d6b50f39fda0e6d024a1ea65d472d703fb41137331"} Nov 24 12:17:30 crc kubenswrapper[4930]: I1124 12:17:30.325822 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d53100f9-6ba2-48da-9836-f05692e91a3b","Type":"ContainerStarted","Data":"a48a0c4801e1d32921412b06fa07f05d5a6a7e40f4661cf2a674d237b9cbfa50"} Nov 24 12:17:31 crc kubenswrapper[4930]: I1124 12:17:31.338768 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"a527a579-00ed-4438-b675-70c5baefb0d9","Type":"ContainerStarted","Data":"caa46e1023417ba1616fc133d029b20b6b32cd7e94dda92ea41bf5be40da98c7"} Nov 24 12:17:31 crc kubenswrapper[4930]: I1124 12:17:31.341286 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d53100f9-6ba2-48da-9836-f05692e91a3b","Type":"ContainerStarted","Data":"9847c9f527a51178305a432002f3434205d3ea85a96c9f48a7dae6a153e33dce"} Nov 24 12:17:31 crc kubenswrapper[4930]: I1124 12:17:31.341335 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d53100f9-6ba2-48da-9836-f05692e91a3b","Type":"ContainerStarted","Data":"df357430ae01583440a5d4af5d671f0d09faa0abc3530b47c988b08551167fb3"} Nov 24 12:17:32 crc kubenswrapper[4930]: I1124 12:17:32.355124 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a527a579-00ed-4438-b675-70c5baefb0d9","Type":"ContainerStarted","Data":"4a4831563ed41220c5352aa5ebd5213cf3c651bf026e587e1565e525eedfa11b"} Nov 24 12:17:32 crc kubenswrapper[4930]: I1124 12:17:32.356182 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 12:17:32 crc kubenswrapper[4930]: I1124 12:17:32.392612 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.3925943 podStartE2EDuration="3.3925943s" podCreationTimestamp="2025-11-24 12:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:32.375689654 +0000 UTC m=+1098.990017604" watchObservedRunningTime="2025-11-24 12:17:32.3925943 +0000 UTC m=+1099.006922250" Nov 24 12:17:32 crc kubenswrapper[4930]: I1124 12:17:32.889970 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:17:32 crc kubenswrapper[4930]: I1124 
12:17:32.927185 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 12:17:32 crc kubenswrapper[4930]: I1124 12:17:32.970965 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-2m4rx"] Nov 24 12:17:32 crc kubenswrapper[4930]: I1124 12:17:32.971319 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx" podUID="9299ea16-3ac9-4356-916d-663e04e08206" containerName="dnsmasq-dns" containerID="cri-o://72e00ea989d6521591f90edae839633b114a25077ccf4434b538ebec9202e01c" gracePeriod=10 Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.391863 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d53100f9-6ba2-48da-9836-f05692e91a3b","Type":"ContainerStarted","Data":"7ef3afbdb73787688fcec56267c1026d618458646ee942e22bcffe9706e17770"} Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.393312 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.424477 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.308650015 podStartE2EDuration="5.424454647s" podCreationTimestamp="2025-11-24 12:17:28 +0000 UTC" firstStartedPulling="2025-11-24 12:17:29.197784106 +0000 UTC m=+1095.812112056" lastFinishedPulling="2025-11-24 12:17:32.313588738 +0000 UTC m=+1098.927916688" observedRunningTime="2025-11-24 12:17:33.422065098 +0000 UTC m=+1100.036393048" watchObservedRunningTime="2025-11-24 12:17:33.424454647 +0000 UTC m=+1100.038782597" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.425749 4930 generic.go:334] "Generic (PLEG): container finished" podID="9299ea16-3ac9-4356-916d-663e04e08206" containerID="72e00ea989d6521591f90edae839633b114a25077ccf4434b538ebec9202e01c" exitCode=0 Nov 24 12:17:33 crc 
kubenswrapper[4930]: I1124 12:17:33.426475 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx" event={"ID":"9299ea16-3ac9-4356-916d-663e04e08206","Type":"ContainerDied","Data":"72e00ea989d6521591f90edae839633b114a25077ccf4434b538ebec9202e01c"} Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.500854 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.570281 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79nrw\" (UniqueName: \"kubernetes.io/projected/9299ea16-3ac9-4356-916d-663e04e08206-kube-api-access-79nrw\") pod \"9299ea16-3ac9-4356-916d-663e04e08206\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.570331 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-ovsdbserver-nb\") pod \"9299ea16-3ac9-4356-916d-663e04e08206\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.570440 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-dns-swift-storage-0\") pod \"9299ea16-3ac9-4356-916d-663e04e08206\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.570523 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-ovsdbserver-sb\") pod \"9299ea16-3ac9-4356-916d-663e04e08206\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.570571 4930 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-dns-svc\") pod \"9299ea16-3ac9-4356-916d-663e04e08206\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.570619 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-config\") pod \"9299ea16-3ac9-4356-916d-663e04e08206\" (UID: \"9299ea16-3ac9-4356-916d-663e04e08206\") " Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.581976 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9299ea16-3ac9-4356-916d-663e04e08206-kube-api-access-79nrw" (OuterVolumeSpecName: "kube-api-access-79nrw") pod "9299ea16-3ac9-4356-916d-663e04e08206" (UID: "9299ea16-3ac9-4356-916d-663e04e08206"). InnerVolumeSpecName "kube-api-access-79nrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.643301 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-config" (OuterVolumeSpecName: "config") pod "9299ea16-3ac9-4356-916d-663e04e08206" (UID: "9299ea16-3ac9-4356-916d-663e04e08206"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.652828 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9299ea16-3ac9-4356-916d-663e04e08206" (UID: "9299ea16-3ac9-4356-916d-663e04e08206"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.658960 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9299ea16-3ac9-4356-916d-663e04e08206" (UID: "9299ea16-3ac9-4356-916d-663e04e08206"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.662175 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9299ea16-3ac9-4356-916d-663e04e08206" (UID: "9299ea16-3ac9-4356-916d-663e04e08206"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.668178 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9299ea16-3ac9-4356-916d-663e04e08206" (UID: "9299ea16-3ac9-4356-916d-663e04e08206"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.672934 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.672973 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79nrw\" (UniqueName: \"kubernetes.io/projected/9299ea16-3ac9-4356-916d-663e04e08206-kube-api-access-79nrw\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.672986 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.672994 4930 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.673003 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:33 crc kubenswrapper[4930]: I1124 12:17:33.673010 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9299ea16-3ac9-4356-916d-663e04e08206-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.039104 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.058944 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7d65b7d547-xbx74" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.184198 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-httpd-config\") pod \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.184340 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx9rm\" (UniqueName: \"kubernetes.io/projected/54f78232-8dea-46dc-9fcd-b34fa6a4d400-kube-api-access-nx9rm\") pod \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.184436 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-config\") pod \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.184462 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-combined-ca-bundle\") pod \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\" (UID: \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.184580 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-ovndb-tls-certs\") pod \"54f78232-8dea-46dc-9fcd-b34fa6a4d400\" (UID: 
\"54f78232-8dea-46dc-9fcd-b34fa6a4d400\") " Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.214786 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f78232-8dea-46dc-9fcd-b34fa6a4d400-kube-api-access-nx9rm" (OuterVolumeSpecName: "kube-api-access-nx9rm") pod "54f78232-8dea-46dc-9fcd-b34fa6a4d400" (UID: "54f78232-8dea-46dc-9fcd-b34fa6a4d400"). InnerVolumeSpecName "kube-api-access-nx9rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.214907 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "54f78232-8dea-46dc-9fcd-b34fa6a4d400" (UID: "54f78232-8dea-46dc-9fcd-b34fa6a4d400"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.272833 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-config" (OuterVolumeSpecName: "config") pod "54f78232-8dea-46dc-9fcd-b34fa6a4d400" (UID: "54f78232-8dea-46dc-9fcd-b34fa6a4d400"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.294336 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54f78232-8dea-46dc-9fcd-b34fa6a4d400" (UID: "54f78232-8dea-46dc-9fcd-b34fa6a4d400"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.298088 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.298126 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.298142 4930 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.298154 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx9rm\" (UniqueName: \"kubernetes.io/projected/54f78232-8dea-46dc-9fcd-b34fa6a4d400-kube-api-access-nx9rm\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.322855 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.338228 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "54f78232-8dea-46dc-9fcd-b34fa6a4d400" (UID: "54f78232-8dea-46dc-9fcd-b34fa6a4d400"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.401189 4930 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/54f78232-8dea-46dc-9fcd-b34fa6a4d400-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.437496 4930 generic.go:334] "Generic (PLEG): container finished" podID="54f78232-8dea-46dc-9fcd-b34fa6a4d400" containerID="84dbd81a4e3885f82029e654f3d6f7085284e80b89db27ed085afe62cdfea73f" exitCode=0 Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.437576 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-575d598bfb-msnzv" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.437595 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575d598bfb-msnzv" event={"ID":"54f78232-8dea-46dc-9fcd-b34fa6a4d400","Type":"ContainerDied","Data":"84dbd81a4e3885f82029e654f3d6f7085284e80b89db27ed085afe62cdfea73f"} Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.438061 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575d598bfb-msnzv" event={"ID":"54f78232-8dea-46dc-9fcd-b34fa6a4d400","Type":"ContainerDied","Data":"6a7f3c5a3e7221f7721a273f796e214c82181a6e5e5f0424692ccbf0f35c3692"} Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.438114 4930 scope.go:117] "RemoveContainer" containerID="f87588fcc1ea10945f088f6e47d7b8528bd4dd874d7d8c771e467e2377bee57a" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.440756 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.440743 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-2m4rx" event={"ID":"9299ea16-3ac9-4356-916d-663e04e08206","Type":"ContainerDied","Data":"3cb628c68bb5f86b49a65acda8649f9e6a280118822ec456d5e4492b1016f8af"} Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.473024 4930 scope.go:117] "RemoveContainer" containerID="84dbd81a4e3885f82029e654f3d6f7085284e80b89db27ed085afe62cdfea73f" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.475597 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-2m4rx"] Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.483607 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-2m4rx"] Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.490235 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-575d598bfb-msnzv"] Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.494899 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-575d598bfb-msnzv"] Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.499263 4930 scope.go:117] "RemoveContainer" containerID="f87588fcc1ea10945f088f6e47d7b8528bd4dd874d7d8c771e467e2377bee57a" Nov 24 12:17:34 crc kubenswrapper[4930]: E1124 12:17:34.501742 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f87588fcc1ea10945f088f6e47d7b8528bd4dd874d7d8c771e467e2377bee57a\": container with ID starting with f87588fcc1ea10945f088f6e47d7b8528bd4dd874d7d8c771e467e2377bee57a not found: ID does not exist" containerID="f87588fcc1ea10945f088f6e47d7b8528bd4dd874d7d8c771e467e2377bee57a" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.501791 4930 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f87588fcc1ea10945f088f6e47d7b8528bd4dd874d7d8c771e467e2377bee57a"} err="failed to get container status \"f87588fcc1ea10945f088f6e47d7b8528bd4dd874d7d8c771e467e2377bee57a\": rpc error: code = NotFound desc = could not find container \"f87588fcc1ea10945f088f6e47d7b8528bd4dd874d7d8c771e467e2377bee57a\": container with ID starting with f87588fcc1ea10945f088f6e47d7b8528bd4dd874d7d8c771e467e2377bee57a not found: ID does not exist" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.501822 4930 scope.go:117] "RemoveContainer" containerID="84dbd81a4e3885f82029e654f3d6f7085284e80b89db27ed085afe62cdfea73f" Nov 24 12:17:34 crc kubenswrapper[4930]: E1124 12:17:34.502263 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84dbd81a4e3885f82029e654f3d6f7085284e80b89db27ed085afe62cdfea73f\": container with ID starting with 84dbd81a4e3885f82029e654f3d6f7085284e80b89db27ed085afe62cdfea73f not found: ID does not exist" containerID="84dbd81a4e3885f82029e654f3d6f7085284e80b89db27ed085afe62cdfea73f" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.502297 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84dbd81a4e3885f82029e654f3d6f7085284e80b89db27ed085afe62cdfea73f"} err="failed to get container status \"84dbd81a4e3885f82029e654f3d6f7085284e80b89db27ed085afe62cdfea73f\": rpc error: code = NotFound desc = could not find container \"84dbd81a4e3885f82029e654f3d6f7085284e80b89db27ed085afe62cdfea73f\": container with ID starting with 84dbd81a4e3885f82029e654f3d6f7085284e80b89db27ed085afe62cdfea73f not found: ID does not exist" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.502318 4930 scope.go:117] "RemoveContainer" containerID="72e00ea989d6521591f90edae839633b114a25077ccf4434b538ebec9202e01c" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.525038 4930 scope.go:117] "RemoveContainer" 
containerID="ca156202cb7c8fbbcad73bdc708fb44228e24ed48b067d8dea16447b768f9512" Nov 24 12:17:34 crc kubenswrapper[4930]: I1124 12:17:34.725109 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:17:35 crc kubenswrapper[4930]: I1124 12:17:35.303825 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:35 crc kubenswrapper[4930]: I1124 12:17:35.309820 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-784c754f4d-ttmj6" Nov 24 12:17:36 crc kubenswrapper[4930]: I1124 12:17:36.097532 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54f78232-8dea-46dc-9fcd-b34fa6a4d400" path="/var/lib/kubelet/pods/54f78232-8dea-46dc-9fcd-b34fa6a4d400/volumes" Nov 24 12:17:36 crc kubenswrapper[4930]: I1124 12:17:36.098243 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9299ea16-3ac9-4356-916d-663e04e08206" path="/var/lib/kubelet/pods/9299ea16-3ac9-4356-916d-663e04e08206/volumes" Nov 24 12:17:36 crc kubenswrapper[4930]: I1124 12:17:36.395983 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:17:36 crc kubenswrapper[4930]: I1124 12:17:36.594115 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7b7594b454-4gfnw" Nov 24 12:17:36 crc kubenswrapper[4930]: I1124 12:17:36.660197 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69b96dd4dd-2xcvn"] Nov 24 12:17:36 crc kubenswrapper[4930]: I1124 12:17:36.660671 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69b96dd4dd-2xcvn" podUID="dc1269fb-938b-4634-a683-9b0375e01915" containerName="horizon" containerID="cri-o://1539e69199a01f99e44f84c909d2ffeda2fc04ba9152bdee9513672ff3c78a9f" gracePeriod=30 Nov 24 12:17:36 
crc kubenswrapper[4930]: I1124 12:17:36.660885 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69b96dd4dd-2xcvn" podUID="dc1269fb-938b-4634-a683-9b0375e01915" containerName="horizon-log" containerID="cri-o://8411c6e068774c00acdd68d94c0dff68f5ae5b30a88439215cbadea9e061ca90" gracePeriod=30 Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.169881 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.226344 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.247769 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 12:17:38 crc kubenswrapper[4930]: E1124 12:17:38.248151 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f78232-8dea-46dc-9fcd-b34fa6a4d400" containerName="neutron-httpd" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.248173 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f78232-8dea-46dc-9fcd-b34fa6a4d400" containerName="neutron-httpd" Nov 24 12:17:38 crc kubenswrapper[4930]: E1124 12:17:38.248191 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9299ea16-3ac9-4356-916d-663e04e08206" containerName="init" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.248196 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="9299ea16-3ac9-4356-916d-663e04e08206" containerName="init" Nov 24 12:17:38 crc kubenswrapper[4930]: E1124 12:17:38.248223 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f78232-8dea-46dc-9fcd-b34fa6a4d400" containerName="neutron-api" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.248229 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f78232-8dea-46dc-9fcd-b34fa6a4d400" containerName="neutron-api" Nov 24 12:17:38 crc 
kubenswrapper[4930]: E1124 12:17:38.248243 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9299ea16-3ac9-4356-916d-663e04e08206" containerName="dnsmasq-dns" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.248248 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="9299ea16-3ac9-4356-916d-663e04e08206" containerName="dnsmasq-dns" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.248410 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f78232-8dea-46dc-9fcd-b34fa6a4d400" containerName="neutron-httpd" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.248423 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f78232-8dea-46dc-9fcd-b34fa6a4d400" containerName="neutron-api" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.248441 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="9299ea16-3ac9-4356-916d-663e04e08206" containerName="dnsmasq-dns" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.249087 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.251599 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.251654 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-8mjwj" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.251922 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.281093 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.394120 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f54ebfcc-40f5-43df-8592-64d05b173cd5-openstack-config\") pod \"openstackclient\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.394446 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f54ebfcc-40f5-43df-8592-64d05b173cd5-openstack-config-secret\") pod \"openstackclient\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.394468 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68cf9\" (UniqueName: \"kubernetes.io/projected/f54ebfcc-40f5-43df-8592-64d05b173cd5-kube-api-access-68cf9\") pod \"openstackclient\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.394582 4930 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54ebfcc-40f5-43df-8592-64d05b173cd5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.417577 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 24 12:17:38 crc kubenswrapper[4930]: E1124 12:17:38.418372 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-68cf9 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="f54ebfcc-40f5-43df-8592-64d05b173cd5" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.425122 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.477381 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.482735 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.495415 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.495982 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="66691740-aef1-4155-b12a-a7ce7f9c5f93" containerName="cinder-scheduler" containerID="cri-o://4b64a0fa45a98c56ded17ef620e0b17f14cb0c8ee949d9f5b426eccf1239ad31" gracePeriod=30 Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.496150 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="66691740-aef1-4155-b12a-a7ce7f9c5f93" containerName="probe" containerID="cri-o://9c58db8a44be4232eb42974f9aec757b82505d784396c5ba789585e8033f9acc" gracePeriod=30 Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.497522 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f54ebfcc-40f5-43df-8592-64d05b173cd5-openstack-config-secret\") pod \"openstackclient\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.497591 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68cf9\" (UniqueName: \"kubernetes.io/projected/f54ebfcc-40f5-43df-8592-64d05b173cd5-kube-api-access-68cf9\") pod \"openstackclient\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.497715 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54ebfcc-40f5-43df-8592-64d05b173cd5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.497743 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f54ebfcc-40f5-43df-8592-64d05b173cd5-openstack-config\") pod \"openstackclient\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: E1124 12:17:38.500030 4930 projected.go:194] Error preparing data for projected volume kube-api-access-68cf9 for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (f54ebfcc-40f5-43df-8592-64d05b173cd5) does not match the UID in record. The object might have been deleted and then recreated Nov 24 12:17:38 crc kubenswrapper[4930]: E1124 12:17:38.500087 4930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f54ebfcc-40f5-43df-8592-64d05b173cd5-kube-api-access-68cf9 podName:f54ebfcc-40f5-43df-8592-64d05b173cd5 nodeName:}" failed. No retries permitted until 2025-11-24 12:17:39.000072083 +0000 UTC m=+1105.614400033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-68cf9" (UniqueName: "kubernetes.io/projected/f54ebfcc-40f5-43df-8592-64d05b173cd5-kube-api-access-68cf9") pod "openstackclient" (UID: "f54ebfcc-40f5-43df-8592-64d05b173cd5") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (f54ebfcc-40f5-43df-8592-64d05b173cd5) does not match the UID in record. 
The object might have been deleted and then recreated Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.500730 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f54ebfcc-40f5-43df-8592-64d05b173cd5-openstack-config\") pod \"openstackclient\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.507278 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54ebfcc-40f5-43df-8592-64d05b173cd5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.508214 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f54ebfcc-40f5-43df-8592-64d05b173cd5-openstack-config-secret\") pod \"openstackclient\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.515250 4930 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="f54ebfcc-40f5-43df-8592-64d05b173cd5" podUID="1416edd0-b4e2-4acb-a449-1e9d40e9b2f5" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.519241 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.563221 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.599568 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1416edd0-b4e2-4acb-a449-1e9d40e9b2f5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.599757 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1416edd0-b4e2-4acb-a449-1e9d40e9b2f5-openstack-config\") pod \"openstackclient\" (UID: \"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.599859 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1416edd0-b4e2-4acb-a449-1e9d40e9b2f5-openstack-config-secret\") pod \"openstackclient\" (UID: \"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.600171 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfnwr\" (UniqueName: \"kubernetes.io/projected/1416edd0-b4e2-4acb-a449-1e9d40e9b2f5-kube-api-access-zfnwr\") pod \"openstackclient\" (UID: \"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5\") " pod="openstack/openstackclient" Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.702223 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f54ebfcc-40f5-43df-8592-64d05b173cd5-openstack-config-secret\") pod \"f54ebfcc-40f5-43df-8592-64d05b173cd5\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") " Nov 24 
12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.702396 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f54ebfcc-40f5-43df-8592-64d05b173cd5-openstack-config\") pod \"f54ebfcc-40f5-43df-8592-64d05b173cd5\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") "
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.702475 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54ebfcc-40f5-43df-8592-64d05b173cd5-combined-ca-bundle\") pod \"f54ebfcc-40f5-43df-8592-64d05b173cd5\" (UID: \"f54ebfcc-40f5-43df-8592-64d05b173cd5\") "
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.702919 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f54ebfcc-40f5-43df-8592-64d05b173cd5-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f54ebfcc-40f5-43df-8592-64d05b173cd5" (UID: "f54ebfcc-40f5-43df-8592-64d05b173cd5"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.703037 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfnwr\" (UniqueName: \"kubernetes.io/projected/1416edd0-b4e2-4acb-a449-1e9d40e9b2f5-kube-api-access-zfnwr\") pod \"openstackclient\" (UID: \"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5\") " pod="openstack/openstackclient"
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.703117 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1416edd0-b4e2-4acb-a449-1e9d40e9b2f5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5\") " pod="openstack/openstackclient"
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.703264 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1416edd0-b4e2-4acb-a449-1e9d40e9b2f5-openstack-config\") pod \"openstackclient\" (UID: \"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5\") " pod="openstack/openstackclient"
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.703318 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1416edd0-b4e2-4acb-a449-1e9d40e9b2f5-openstack-config-secret\") pod \"openstackclient\" (UID: \"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5\") " pod="openstack/openstackclient"
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.703395 4930 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f54ebfcc-40f5-43df-8592-64d05b173cd5-openstack-config\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.703412 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68cf9\" (UniqueName: \"kubernetes.io/projected/f54ebfcc-40f5-43df-8592-64d05b173cd5-kube-api-access-68cf9\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.705262 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1416edd0-b4e2-4acb-a449-1e9d40e9b2f5-openstack-config\") pod \"openstackclient\" (UID: \"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5\") " pod="openstack/openstackclient"
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.707472 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54ebfcc-40f5-43df-8592-64d05b173cd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f54ebfcc-40f5-43df-8592-64d05b173cd5" (UID: "f54ebfcc-40f5-43df-8592-64d05b173cd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.707768 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54ebfcc-40f5-43df-8592-64d05b173cd5-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f54ebfcc-40f5-43df-8592-64d05b173cd5" (UID: "f54ebfcc-40f5-43df-8592-64d05b173cd5"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.710253 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1416edd0-b4e2-4acb-a449-1e9d40e9b2f5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5\") " pod="openstack/openstackclient"
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.711650 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1416edd0-b4e2-4acb-a449-1e9d40e9b2f5-openstack-config-secret\") pod \"openstackclient\" (UID: \"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5\") " pod="openstack/openstackclient"
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.729219 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfnwr\" (UniqueName: \"kubernetes.io/projected/1416edd0-b4e2-4acb-a449-1e9d40e9b2f5-kube-api-access-zfnwr\") pod \"openstackclient\" (UID: \"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5\") " pod="openstack/openstackclient"
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.806391 4930 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f54ebfcc-40f5-43df-8592-64d05b173cd5-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.806455 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54ebfcc-40f5-43df-8592-64d05b173cd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:38 crc kubenswrapper[4930]: I1124 12:17:38.860826 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.416054 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.548746 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5","Type":"ContainerStarted","Data":"d34a37a2918722ca5d4765ed371dd84108cff99612cfc6aa2a4bfed442418445"}
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.580030 4930 generic.go:334] "Generic (PLEG): container finished" podID="66691740-aef1-4155-b12a-a7ce7f9c5f93" containerID="9c58db8a44be4232eb42974f9aec757b82505d784396c5ba789585e8033f9acc" exitCode=0
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.580253 4930 generic.go:334] "Generic (PLEG): container finished" podID="66691740-aef1-4155-b12a-a7ce7f9c5f93" containerID="4b64a0fa45a98c56ded17ef620e0b17f14cb0c8ee949d9f5b426eccf1239ad31" exitCode=0
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.580422 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.580603 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66691740-aef1-4155-b12a-a7ce7f9c5f93","Type":"ContainerDied","Data":"9c58db8a44be4232eb42974f9aec757b82505d784396c5ba789585e8033f9acc"}
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.580671 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66691740-aef1-4155-b12a-a7ce7f9c5f93","Type":"ContainerDied","Data":"4b64a0fa45a98c56ded17ef620e0b17f14cb0c8ee949d9f5b426eccf1239ad31"}
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.590103 4930 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="f54ebfcc-40f5-43df-8592-64d05b173cd5" podUID="1416edd0-b4e2-4acb-a449-1e9d40e9b2f5"
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.814359 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.935934 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-config-data\") pod \"66691740-aef1-4155-b12a-a7ce7f9c5f93\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") "
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.935990 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-combined-ca-bundle\") pod \"66691740-aef1-4155-b12a-a7ce7f9c5f93\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") "
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.936014 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2s92\" (UniqueName: \"kubernetes.io/projected/66691740-aef1-4155-b12a-a7ce7f9c5f93-kube-api-access-h2s92\") pod \"66691740-aef1-4155-b12a-a7ce7f9c5f93\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") "
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.936039 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-scripts\") pod \"66691740-aef1-4155-b12a-a7ce7f9c5f93\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") "
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.936068 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-config-data-custom\") pod \"66691740-aef1-4155-b12a-a7ce7f9c5f93\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") "
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.936107 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66691740-aef1-4155-b12a-a7ce7f9c5f93-etc-machine-id\") pod \"66691740-aef1-4155-b12a-a7ce7f9c5f93\" (UID: \"66691740-aef1-4155-b12a-a7ce7f9c5f93\") "
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.938162 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66691740-aef1-4155-b12a-a7ce7f9c5f93-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "66691740-aef1-4155-b12a-a7ce7f9c5f93" (UID: "66691740-aef1-4155-b12a-a7ce7f9c5f93"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.946815 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "66691740-aef1-4155-b12a-a7ce7f9c5f93" (UID: "66691740-aef1-4155-b12a-a7ce7f9c5f93"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.951031 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66691740-aef1-4155-b12a-a7ce7f9c5f93-kube-api-access-h2s92" (OuterVolumeSpecName: "kube-api-access-h2s92") pod "66691740-aef1-4155-b12a-a7ce7f9c5f93" (UID: "66691740-aef1-4155-b12a-a7ce7f9c5f93"). InnerVolumeSpecName "kube-api-access-h2s92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:17:39 crc kubenswrapper[4930]: I1124 12:17:39.956790 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-scripts" (OuterVolumeSpecName: "scripts") pod "66691740-aef1-4155-b12a-a7ce7f9c5f93" (UID: "66691740-aef1-4155-b12a-a7ce7f9c5f93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.011693 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66691740-aef1-4155-b12a-a7ce7f9c5f93" (UID: "66691740-aef1-4155-b12a-a7ce7f9c5f93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.041953 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.042315 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2s92\" (UniqueName: \"kubernetes.io/projected/66691740-aef1-4155-b12a-a7ce7f9c5f93-kube-api-access-h2s92\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.042331 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.042345 4930 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.042359 4930 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66691740-aef1-4155-b12a-a7ce7f9c5f93-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.046750 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-config-data" (OuterVolumeSpecName: "config-data") pod "66691740-aef1-4155-b12a-a7ce7f9c5f93" (UID: "66691740-aef1-4155-b12a-a7ce7f9c5f93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.096924 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f54ebfcc-40f5-43df-8592-64d05b173cd5" path="/var/lib/kubelet/pods/f54ebfcc-40f5-43df-8592-64d05b173cd5/volumes"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.144966 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66691740-aef1-4155-b12a-a7ce7f9c5f93-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.594763 4930 generic.go:334] "Generic (PLEG): container finished" podID="dc1269fb-938b-4634-a683-9b0375e01915" containerID="1539e69199a01f99e44f84c909d2ffeda2fc04ba9152bdee9513672ff3c78a9f" exitCode=0
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.594851 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69b96dd4dd-2xcvn" event={"ID":"dc1269fb-938b-4634-a683-9b0375e01915","Type":"ContainerDied","Data":"1539e69199a01f99e44f84c909d2ffeda2fc04ba9152bdee9513672ff3c78a9f"}
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.596617 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.596611 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66691740-aef1-4155-b12a-a7ce7f9c5f93","Type":"ContainerDied","Data":"0f04be63e020e0810f97362a5d82353bc12dce26603d979e979fa0362853bfa4"}
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.596744 4930 scope.go:117] "RemoveContainer" containerID="9c58db8a44be4232eb42974f9aec757b82505d784396c5ba789585e8033f9acc"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.625798 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.663080 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.675718 4930 scope.go:117] "RemoveContainer" containerID="4b64a0fa45a98c56ded17ef620e0b17f14cb0c8ee949d9f5b426eccf1239ad31"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.685182 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 12:17:40 crc kubenswrapper[4930]: E1124 12:17:40.685912 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66691740-aef1-4155-b12a-a7ce7f9c5f93" containerName="probe"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.685929 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="66691740-aef1-4155-b12a-a7ce7f9c5f93" containerName="probe"
Nov 24 12:17:40 crc kubenswrapper[4930]: E1124 12:17:40.685954 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66691740-aef1-4155-b12a-a7ce7f9c5f93" containerName="cinder-scheduler"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.685985 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="66691740-aef1-4155-b12a-a7ce7f9c5f93" containerName="cinder-scheduler"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.686316 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="66691740-aef1-4155-b12a-a7ce7f9c5f93" containerName="probe"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.686339 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="66691740-aef1-4155-b12a-a7ce7f9c5f93" containerName="cinder-scheduler"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.688246 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.691386 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.726627 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.880168 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.880259 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.880298 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-scripts\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.880324 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-config-data\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.880819 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94m44\" (UniqueName: \"kubernetes.io/projected/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-kube-api-access-94m44\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.880976 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.982393 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94m44\" (UniqueName: \"kubernetes.io/projected/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-kube-api-access-94m44\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.982459 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.982509 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.982583 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.982622 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-scripts\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.982649 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-config-data\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.982723 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.989345 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-scripts\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.989404 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:40 crc kubenswrapper[4930]: I1124 12:17:40.989574 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:41 crc kubenswrapper[4930]: I1124 12:17:40.999989 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-config-data\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:41 crc kubenswrapper[4930]: I1124 12:17:41.002250 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94m44\" (UniqueName: \"kubernetes.io/projected/f4f3ac20-aa87-48a4-9980-08b8ca2053ef-kube-api-access-94m44\") pod \"cinder-scheduler-0\" (UID: \"f4f3ac20-aa87-48a4-9980-08b8ca2053ef\") " pod="openstack/cinder-scheduler-0"
Nov 24 12:17:41 crc kubenswrapper[4930]: I1124 12:17:41.031484 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 24 12:17:41 crc kubenswrapper[4930]: I1124 12:17:41.549715 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 12:17:41 crc kubenswrapper[4930]: W1124 12:17:41.555348 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4f3ac20_aa87_48a4_9980_08b8ca2053ef.slice/crio-4521ff47be8044dff8a73303133c9f452292818d682b3ea27dad854809f7fc1c WatchSource:0}: Error finding container 4521ff47be8044dff8a73303133c9f452292818d682b3ea27dad854809f7fc1c: Status 404 returned error can't find the container with id 4521ff47be8044dff8a73303133c9f452292818d682b3ea27dad854809f7fc1c
Nov 24 12:17:41 crc kubenswrapper[4930]: I1124 12:17:41.614520 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4f3ac20-aa87-48a4-9980-08b8ca2053ef","Type":"ContainerStarted","Data":"4521ff47be8044dff8a73303133c9f452292818d682b3ea27dad854809f7fc1c"}
Nov 24 12:17:41 crc kubenswrapper[4930]: I1124 12:17:41.803964 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69b96dd4dd-2xcvn" podUID="dc1269fb-938b-4634-a683-9b0375e01915" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Nov 24 12:17:42 crc kubenswrapper[4930]: I1124 12:17:42.094919 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66691740-aef1-4155-b12a-a7ce7f9c5f93" path="/var/lib/kubelet/pods/66691740-aef1-4155-b12a-a7ce7f9c5f93/volumes"
Nov 24 12:17:42 crc kubenswrapper[4930]: I1124 12:17:42.276788 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Nov 24 12:17:42 crc kubenswrapper[4930]: I1124 12:17:42.629890 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4f3ac20-aa87-48a4-9980-08b8ca2053ef","Type":"ContainerStarted","Data":"8f313dc5ad4b24ac8466800220384a123c5582831def5f1b8287b26eecc37560"}
Nov 24 12:17:43 crc kubenswrapper[4930]: I1124 12:17:43.652304 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4f3ac20-aa87-48a4-9980-08b8ca2053ef","Type":"ContainerStarted","Data":"f72c508474e9a84bca2e87635d27ca95bcb90ee3afc2cfdd5d5e447bfdf3e475"}
Nov 24 12:17:43 crc kubenswrapper[4930]: I1124 12:17:43.675134 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.6751143600000002 podStartE2EDuration="3.67511436s" podCreationTimestamp="2025-11-24 12:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:43.671014042 +0000 UTC m=+1110.285342032" watchObservedRunningTime="2025-11-24 12:17:43.67511436 +0000 UTC m=+1110.289442310"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.123370 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6f4c64f46c-fdhkr"]
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.125107 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f4c64f46c-fdhkr"]
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.125196 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.135343 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.135799 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.136885 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.238200 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7544a665-a649-46c1-b2e2-4f0179645890-etc-swift\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.238296 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7544a665-a649-46c1-b2e2-4f0179645890-combined-ca-bundle\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.238356 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7544a665-a649-46c1-b2e2-4f0179645890-config-data\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.238374 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7544a665-a649-46c1-b2e2-4f0179645890-run-httpd\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.239301 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7544a665-a649-46c1-b2e2-4f0179645890-public-tls-certs\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.239335 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7544a665-a649-46c1-b2e2-4f0179645890-internal-tls-certs\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.239365 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7544a665-a649-46c1-b2e2-4f0179645890-log-httpd\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.239468 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk8mm\" (UniqueName: \"kubernetes.io/projected/7544a665-a649-46c1-b2e2-4f0179645890-kube-api-access-sk8mm\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.344069 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk8mm\" (UniqueName: \"kubernetes.io/projected/7544a665-a649-46c1-b2e2-4f0179645890-kube-api-access-sk8mm\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.344165 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7544a665-a649-46c1-b2e2-4f0179645890-etc-swift\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.344196 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7544a665-a649-46c1-b2e2-4f0179645890-combined-ca-bundle\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.344235 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7544a665-a649-46c1-b2e2-4f0179645890-config-data\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.344254 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7544a665-a649-46c1-b2e2-4f0179645890-run-httpd\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.344328 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7544a665-a649-46c1-b2e2-4f0179645890-public-tls-certs\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.344350 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7544a665-a649-46c1-b2e2-4f0179645890-internal-tls-certs\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.344383 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7544a665-a649-46c1-b2e2-4f0179645890-log-httpd\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.348078 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7544a665-a649-46c1-b2e2-4f0179645890-run-httpd\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.351155 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7544a665-a649-46c1-b2e2-4f0179645890-log-httpd\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.355751 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7544a665-a649-46c1-b2e2-4f0179645890-etc-swift\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.356215 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7544a665-a649-46c1-b2e2-4f0179645890-config-data\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.360018 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7544a665-a649-46c1-b2e2-4f0179645890-combined-ca-bundle\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.360594 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7544a665-a649-46c1-b2e2-4f0179645890-internal-tls-certs\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.364823 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7544a665-a649-46c1-b2e2-4f0179645890-public-tls-certs\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.366154 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk8mm\" (UniqueName: \"kubernetes.io/projected/7544a665-a649-46c1-b2e2-4f0179645890-kube-api-access-sk8mm\") pod \"swift-proxy-6f4c64f46c-fdhkr\" (UID: \"7544a665-a649-46c1-b2e2-4f0179645890\") " pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.452125 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6f4c64f46c-fdhkr"
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.503607 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.506005 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="sg-core" containerID="cri-o://9847c9f527a51178305a432002f3434205d3ea85a96c9f48a7dae6a153e33dce" gracePeriod=30
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.506056 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="ceilometer-notification-agent" containerID="cri-o://df357430ae01583440a5d4af5d671f0d09faa0abc3530b47c988b08551167fb3" gracePeriod=30
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.506169 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="proxy-httpd" containerID="cri-o://7ef3afbdb73787688fcec56267c1026d618458646ee942e22bcffe9706e17770" gracePeriod=30
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.505975 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="ceilometer-central-agent" containerID="cri-o://a48a0c4801e1d32921412b06fa07f05d5a6a7e40f4661cf2a674d237b9cbfa50" gracePeriod=30
Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.515373 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0"
podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.168:3000/\": EOF" Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.692265 4930 generic.go:334] "Generic (PLEG): container finished" podID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerID="9847c9f527a51178305a432002f3434205d3ea85a96c9f48a7dae6a153e33dce" exitCode=2 Nov 24 12:17:44 crc kubenswrapper[4930]: I1124 12:17:44.692519 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d53100f9-6ba2-48da-9836-f05692e91a3b","Type":"ContainerDied","Data":"9847c9f527a51178305a432002f3434205d3ea85a96c9f48a7dae6a153e33dce"} Nov 24 12:17:45 crc kubenswrapper[4930]: I1124 12:17:45.102637 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f4c64f46c-fdhkr"] Nov 24 12:17:45 crc kubenswrapper[4930]: W1124 12:17:45.106220 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7544a665_a649_46c1_b2e2_4f0179645890.slice/crio-41c12c4c251c0c9dcb626f2651d9cbf8e1dbc20de0f4360bc624ad701c319e2d WatchSource:0}: Error finding container 41c12c4c251c0c9dcb626f2651d9cbf8e1dbc20de0f4360bc624ad701c319e2d: Status 404 returned error can't find the container with id 41c12c4c251c0c9dcb626f2651d9cbf8e1dbc20de0f4360bc624ad701c319e2d Nov 24 12:17:45 crc kubenswrapper[4930]: I1124 12:17:45.707171 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f4c64f46c-fdhkr" event={"ID":"7544a665-a649-46c1-b2e2-4f0179645890","Type":"ContainerStarted","Data":"3e001d7ddc32b7ff8b2cf5be720f7acb1196073a6fb37ca1024ddfc1bb00d61d"} Nov 24 12:17:45 crc kubenswrapper[4930]: I1124 12:17:45.707501 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f4c64f46c-fdhkr" 
event={"ID":"7544a665-a649-46c1-b2e2-4f0179645890","Type":"ContainerStarted","Data":"41c12c4c251c0c9dcb626f2651d9cbf8e1dbc20de0f4360bc624ad701c319e2d"} Nov 24 12:17:45 crc kubenswrapper[4930]: I1124 12:17:45.715909 4930 generic.go:334] "Generic (PLEG): container finished" podID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerID="7ef3afbdb73787688fcec56267c1026d618458646ee942e22bcffe9706e17770" exitCode=0 Nov 24 12:17:45 crc kubenswrapper[4930]: I1124 12:17:45.716113 4930 generic.go:334] "Generic (PLEG): container finished" podID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerID="a48a0c4801e1d32921412b06fa07f05d5a6a7e40f4661cf2a674d237b9cbfa50" exitCode=0 Nov 24 12:17:45 crc kubenswrapper[4930]: I1124 12:17:45.716010 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d53100f9-6ba2-48da-9836-f05692e91a3b","Type":"ContainerDied","Data":"7ef3afbdb73787688fcec56267c1026d618458646ee942e22bcffe9706e17770"} Nov 24 12:17:45 crc kubenswrapper[4930]: I1124 12:17:45.716270 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d53100f9-6ba2-48da-9836-f05692e91a3b","Type":"ContainerDied","Data":"a48a0c4801e1d32921412b06fa07f05d5a6a7e40f4661cf2a674d237b9cbfa50"} Nov 24 12:17:46 crc kubenswrapper[4930]: I1124 12:17:46.031628 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 12:17:48 crc kubenswrapper[4930]: I1124 12:17:48.748843 4930 generic.go:334] "Generic (PLEG): container finished" podID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerID="df357430ae01583440a5d4af5d671f0d09faa0abc3530b47c988b08551167fb3" exitCode=0 Nov 24 12:17:48 crc kubenswrapper[4930]: I1124 12:17:48.749037 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d53100f9-6ba2-48da-9836-f05692e91a3b","Type":"ContainerDied","Data":"df357430ae01583440a5d4af5d671f0d09faa0abc3530b47c988b08551167fb3"} Nov 24 12:17:51 
crc kubenswrapper[4930]: I1124 12:17:51.085611 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.175154 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-config-data\") pod \"d53100f9-6ba2-48da-9836-f05692e91a3b\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.175293 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d53100f9-6ba2-48da-9836-f05692e91a3b-run-httpd\") pod \"d53100f9-6ba2-48da-9836-f05692e91a3b\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.175352 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-sg-core-conf-yaml\") pod \"d53100f9-6ba2-48da-9836-f05692e91a3b\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.175421 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d53100f9-6ba2-48da-9836-f05692e91a3b-log-httpd\") pod \"d53100f9-6ba2-48da-9836-f05692e91a3b\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.175442 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-combined-ca-bundle\") pod \"d53100f9-6ba2-48da-9836-f05692e91a3b\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.175468 4930 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsg4p\" (UniqueName: \"kubernetes.io/projected/d53100f9-6ba2-48da-9836-f05692e91a3b-kube-api-access-jsg4p\") pod \"d53100f9-6ba2-48da-9836-f05692e91a3b\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.175567 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-scripts\") pod \"d53100f9-6ba2-48da-9836-f05692e91a3b\" (UID: \"d53100f9-6ba2-48da-9836-f05692e91a3b\") " Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.176377 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d53100f9-6ba2-48da-9836-f05692e91a3b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d53100f9-6ba2-48da-9836-f05692e91a3b" (UID: "d53100f9-6ba2-48da-9836-f05692e91a3b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.176355 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d53100f9-6ba2-48da-9836-f05692e91a3b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d53100f9-6ba2-48da-9836-f05692e91a3b" (UID: "d53100f9-6ba2-48da-9836-f05692e91a3b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.180996 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-scripts" (OuterVolumeSpecName: "scripts") pod "d53100f9-6ba2-48da-9836-f05692e91a3b" (UID: "d53100f9-6ba2-48da-9836-f05692e91a3b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.181324 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d53100f9-6ba2-48da-9836-f05692e91a3b-kube-api-access-jsg4p" (OuterVolumeSpecName: "kube-api-access-jsg4p") pod "d53100f9-6ba2-48da-9836-f05692e91a3b" (UID: "d53100f9-6ba2-48da-9836-f05692e91a3b"). InnerVolumeSpecName "kube-api-access-jsg4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.219021 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d53100f9-6ba2-48da-9836-f05692e91a3b" (UID: "d53100f9-6ba2-48da-9836-f05692e91a3b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.261373 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d53100f9-6ba2-48da-9836-f05692e91a3b" (UID: "d53100f9-6ba2-48da-9836-f05692e91a3b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.277662 4930 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d53100f9-6ba2-48da-9836-f05692e91a3b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.277706 4930 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.277720 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.277732 4930 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d53100f9-6ba2-48da-9836-f05692e91a3b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.277743 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsg4p\" (UniqueName: \"kubernetes.io/projected/d53100f9-6ba2-48da-9836-f05692e91a3b-kube-api-access-jsg4p\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.277754 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.288879 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-config-data" (OuterVolumeSpecName: "config-data") pod "d53100f9-6ba2-48da-9836-f05692e91a3b" (UID: "d53100f9-6ba2-48da-9836-f05692e91a3b"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.354923 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.382941 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d53100f9-6ba2-48da-9836-f05692e91a3b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.784382 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f4c64f46c-fdhkr" event={"ID":"7544a665-a649-46c1-b2e2-4f0179645890","Type":"ContainerStarted","Data":"c0f1db007e749a5ed3e939b43338b57a8382fc0838ddab4b75eef85d2a73521b"} Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.784460 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f4c64f46c-fdhkr" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.784949 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f4c64f46c-fdhkr" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.788036 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d53100f9-6ba2-48da-9836-f05692e91a3b","Type":"ContainerDied","Data":"55c0e205a3abcaf28e775c1e8003b709ab7d35ceb15bd741c62f3b142bb309f7"} Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.788090 4930 scope.go:117] "RemoveContainer" containerID="7ef3afbdb73787688fcec56267c1026d618458646ee942e22bcffe9706e17770" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.788239 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.795025 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1416edd0-b4e2-4acb-a449-1e9d40e9b2f5","Type":"ContainerStarted","Data":"a2bb837c3848253e4080ccd1d3e079080e15700fe43dbb7e864f056a19a224b5"} Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.803545 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69b96dd4dd-2xcvn" podUID="dc1269fb-938b-4634-a683-9b0375e01915" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.803626 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6f4c64f46c-fdhkr" podUID="7544a665-a649-46c1-b2e2-4f0179645890" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.813776 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6f4c64f46c-fdhkr" podStartSLOduration=7.813733899 podStartE2EDuration="7.813733899s" podCreationTimestamp="2025-11-24 12:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:51.809726244 +0000 UTC m=+1118.424054214" watchObservedRunningTime="2025-11-24 12:17:51.813733899 +0000 UTC m=+1118.428061849" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.823084 4930 scope.go:117] "RemoveContainer" containerID="9847c9f527a51178305a432002f3434205d3ea85a96c9f48a7dae6a153e33dce" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.838806 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" 
podStartSLOduration=2.429484693 podStartE2EDuration="13.838787369s" podCreationTimestamp="2025-11-24 12:17:38 +0000 UTC" firstStartedPulling="2025-11-24 12:17:39.428970679 +0000 UTC m=+1106.043298629" lastFinishedPulling="2025-11-24 12:17:50.838273355 +0000 UTC m=+1117.452601305" observedRunningTime="2025-11-24 12:17:51.827220827 +0000 UTC m=+1118.441548787" watchObservedRunningTime="2025-11-24 12:17:51.838787369 +0000 UTC m=+1118.453115329" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.846336 4930 scope.go:117] "RemoveContainer" containerID="df357430ae01583440a5d4af5d671f0d09faa0abc3530b47c988b08551167fb3" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.867339 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.871796 4930 scope.go:117] "RemoveContainer" containerID="a48a0c4801e1d32921412b06fa07f05d5a6a7e40f4661cf2a674d237b9cbfa50" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.879629 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.892218 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:51 crc kubenswrapper[4930]: E1124 12:17:51.892788 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="ceilometer-central-agent" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.892813 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="ceilometer-central-agent" Nov 24 12:17:51 crc kubenswrapper[4930]: E1124 12:17:51.892831 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="sg-core" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.892839 4930 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="sg-core" Nov 24 12:17:51 crc kubenswrapper[4930]: E1124 12:17:51.892855 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="ceilometer-notification-agent" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.892863 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="ceilometer-notification-agent" Nov 24 12:17:51 crc kubenswrapper[4930]: E1124 12:17:51.892882 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="proxy-httpd" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.892888 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="proxy-httpd" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.893091 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="ceilometer-notification-agent" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.893111 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="ceilometer-central-agent" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.893125 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="proxy-httpd" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.893137 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" containerName="sg-core" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.895150 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.899221 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.899526 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:17:51 crc kubenswrapper[4930]: I1124 12:17:51.927115 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.005046 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjt9k\" (UniqueName: \"kubernetes.io/projected/7dc67723-020c-471b-9834-f0dda7578d11-kube-api-access-fjt9k\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.005106 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.005240 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dc67723-020c-471b-9834-f0dda7578d11-log-httpd\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.005277 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dc67723-020c-471b-9834-f0dda7578d11-run-httpd\") pod \"ceilometer-0\" (UID: 
\"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.005466 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-scripts\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.005557 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-config-data\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.005771 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.094895 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d53100f9-6ba2-48da-9836-f05692e91a3b" path="/var/lib/kubelet/pods/d53100f9-6ba2-48da-9836-f05692e91a3b/volumes" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.107851 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.107916 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjt9k\" (UniqueName: 
\"kubernetes.io/projected/7dc67723-020c-471b-9834-f0dda7578d11-kube-api-access-fjt9k\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.107940 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.107998 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dc67723-020c-471b-9834-f0dda7578d11-log-httpd\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.108024 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dc67723-020c-471b-9834-f0dda7578d11-run-httpd\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.108051 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-scripts\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.108066 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-config-data\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.109098 4930 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dc67723-020c-471b-9834-f0dda7578d11-log-httpd\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.109475 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dc67723-020c-471b-9834-f0dda7578d11-run-httpd\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.112717 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-scripts\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.113042 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.113326 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.114270 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-config-data\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 
12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.132174 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjt9k\" (UniqueName: \"kubernetes.io/projected/7dc67723-020c-471b-9834-f0dda7578d11-kube-api-access-fjt9k\") pod \"ceilometer-0\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.227768 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.810494 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f4c64f46c-fdhkr" Nov 24 12:17:52 crc kubenswrapper[4930]: W1124 12:17:52.906195 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dc67723_020c_471b_9834_f0dda7578d11.slice/crio-3a1d0a00bb4701dd963b3f10dcbae7579b25bc63a9aaae1a96faae47e74dda79 WatchSource:0}: Error finding container 3a1d0a00bb4701dd963b3f10dcbae7579b25bc63a9aaae1a96faae47e74dda79: Status 404 returned error can't find the container with id 3a1d0a00bb4701dd963b3f10dcbae7579b25bc63a9aaae1a96faae47e74dda79 Nov 24 12:17:52 crc kubenswrapper[4930]: I1124 12:17:52.917303 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:17:53 crc kubenswrapper[4930]: I1124 12:17:53.814770 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dc67723-020c-471b-9834-f0dda7578d11","Type":"ContainerStarted","Data":"3a1d0a00bb4701dd963b3f10dcbae7579b25bc63a9aaae1a96faae47e74dda79"} Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.036593 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-d47hr"] Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.038115 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-d47hr" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.051377 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-d47hr"] Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.132607 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vx4m5"] Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.133889 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vx4m5" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.145648 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vx4m5"] Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.155600 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a37265e-62d9-4ebc-9793-aed961e89590-operator-scripts\") pod \"nova-api-db-create-d47hr\" (UID: \"8a37265e-62d9-4ebc-9793-aed961e89590\") " pod="openstack/nova-api-db-create-d47hr" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.155705 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8z6s\" (UniqueName: \"kubernetes.io/projected/8a37265e-62d9-4ebc-9793-aed961e89590-kube-api-access-n8z6s\") pod \"nova-api-db-create-d47hr\" (UID: \"8a37265e-62d9-4ebc-9793-aed961e89590\") " pod="openstack/nova-api-db-create-d47hr" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.158204 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-16c3-account-create-589jb"] Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.159536 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-16c3-account-create-589jb" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.164418 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.167829 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-16c3-account-create-589jb"] Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.258338 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q8jf\" (UniqueName: \"kubernetes.io/projected/2a3ef300-b344-4fba-a285-f85430bccd47-kube-api-access-9q8jf\") pod \"nova-api-16c3-account-create-589jb\" (UID: \"2a3ef300-b344-4fba-a285-f85430bccd47\") " pod="openstack/nova-api-16c3-account-create-589jb" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.258763 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a37265e-62d9-4ebc-9793-aed961e89590-operator-scripts\") pod \"nova-api-db-create-d47hr\" (UID: \"8a37265e-62d9-4ebc-9793-aed961e89590\") " pod="openstack/nova-api-db-create-d47hr" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.259927 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9f6t\" (UniqueName: \"kubernetes.io/projected/25d0ae65-ed30-465d-a12a-65394f309c5a-kube-api-access-x9f6t\") pod \"nova-cell0-db-create-vx4m5\" (UID: \"25d0ae65-ed30-465d-a12a-65394f309c5a\") " pod="openstack/nova-cell0-db-create-vx4m5" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.260013 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a37265e-62d9-4ebc-9793-aed961e89590-operator-scripts\") pod \"nova-api-db-create-d47hr\" (UID: \"8a37265e-62d9-4ebc-9793-aed961e89590\") " 
pod="openstack/nova-api-db-create-d47hr" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.260221 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8z6s\" (UniqueName: \"kubernetes.io/projected/8a37265e-62d9-4ebc-9793-aed961e89590-kube-api-access-n8z6s\") pod \"nova-api-db-create-d47hr\" (UID: \"8a37265e-62d9-4ebc-9793-aed961e89590\") " pod="openstack/nova-api-db-create-d47hr" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.260266 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a3ef300-b344-4fba-a285-f85430bccd47-operator-scripts\") pod \"nova-api-16c3-account-create-589jb\" (UID: \"2a3ef300-b344-4fba-a285-f85430bccd47\") " pod="openstack/nova-api-16c3-account-create-589jb" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.260392 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d0ae65-ed30-465d-a12a-65394f309c5a-operator-scripts\") pod \"nova-cell0-db-create-vx4m5\" (UID: \"25d0ae65-ed30-465d-a12a-65394f309c5a\") " pod="openstack/nova-cell0-db-create-vx4m5" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.310586 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8z6s\" (UniqueName: \"kubernetes.io/projected/8a37265e-62d9-4ebc-9793-aed961e89590-kube-api-access-n8z6s\") pod \"nova-api-db-create-d47hr\" (UID: \"8a37265e-62d9-4ebc-9793-aed961e89590\") " pod="openstack/nova-api-db-create-d47hr" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.340453 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-4hgkg"] Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.342118 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4hgkg" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.353106 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8a17-account-create-6h9tj"] Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.354633 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8a17-account-create-6h9tj" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.360401 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.367636 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q8jf\" (UniqueName: \"kubernetes.io/projected/2a3ef300-b344-4fba-a285-f85430bccd47-kube-api-access-9q8jf\") pod \"nova-api-16c3-account-create-589jb\" (UID: \"2a3ef300-b344-4fba-a285-f85430bccd47\") " pod="openstack/nova-api-16c3-account-create-589jb" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.367775 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9f6t\" (UniqueName: \"kubernetes.io/projected/25d0ae65-ed30-465d-a12a-65394f309c5a-kube-api-access-x9f6t\") pod \"nova-cell0-db-create-vx4m5\" (UID: \"25d0ae65-ed30-465d-a12a-65394f309c5a\") " pod="openstack/nova-cell0-db-create-vx4m5" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.367926 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a3ef300-b344-4fba-a285-f85430bccd47-operator-scripts\") pod \"nova-api-16c3-account-create-589jb\" (UID: \"2a3ef300-b344-4fba-a285-f85430bccd47\") " pod="openstack/nova-api-16c3-account-create-589jb" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.368496 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-d47hr" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.368517 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4hgkg"] Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.368576 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d0ae65-ed30-465d-a12a-65394f309c5a-operator-scripts\") pod \"nova-cell0-db-create-vx4m5\" (UID: \"25d0ae65-ed30-465d-a12a-65394f309c5a\") " pod="openstack/nova-cell0-db-create-vx4m5" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.394821 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8a17-account-create-6h9tj"] Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.395856 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d0ae65-ed30-465d-a12a-65394f309c5a-operator-scripts\") pod \"nova-cell0-db-create-vx4m5\" (UID: \"25d0ae65-ed30-465d-a12a-65394f309c5a\") " pod="openstack/nova-cell0-db-create-vx4m5" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.396013 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a3ef300-b344-4fba-a285-f85430bccd47-operator-scripts\") pod \"nova-api-16c3-account-create-589jb\" (UID: \"2a3ef300-b344-4fba-a285-f85430bccd47\") " pod="openstack/nova-api-16c3-account-create-589jb" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.399088 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q8jf\" (UniqueName: \"kubernetes.io/projected/2a3ef300-b344-4fba-a285-f85430bccd47-kube-api-access-9q8jf\") pod \"nova-api-16c3-account-create-589jb\" (UID: \"2a3ef300-b344-4fba-a285-f85430bccd47\") " pod="openstack/nova-api-16c3-account-create-589jb" Nov 24 12:17:54 crc 
kubenswrapper[4930]: I1124 12:17:54.408020 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9f6t\" (UniqueName: \"kubernetes.io/projected/25d0ae65-ed30-465d-a12a-65394f309c5a-kube-api-access-x9f6t\") pod \"nova-cell0-db-create-vx4m5\" (UID: \"25d0ae65-ed30-465d-a12a-65394f309c5a\") " pod="openstack/nova-cell0-db-create-vx4m5" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.462018 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vx4m5" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.471676 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kvb7\" (UniqueName: \"kubernetes.io/projected/a9988481-8889-4845-a558-9a3fa4f14322-kube-api-access-6kvb7\") pod \"nova-cell1-db-create-4hgkg\" (UID: \"a9988481-8889-4845-a558-9a3fa4f14322\") " pod="openstack/nova-cell1-db-create-4hgkg" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.471730 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85937f48-37f8-4673-ad68-91c8b5f10a8e-operator-scripts\") pod \"nova-cell0-8a17-account-create-6h9tj\" (UID: \"85937f48-37f8-4673-ad68-91c8b5f10a8e\") " pod="openstack/nova-cell0-8a17-account-create-6h9tj" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.471861 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9988481-8889-4845-a558-9a3fa4f14322-operator-scripts\") pod \"nova-cell1-db-create-4hgkg\" (UID: \"a9988481-8889-4845-a558-9a3fa4f14322\") " pod="openstack/nova-cell1-db-create-4hgkg" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.471903 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk9mh\" 
(UniqueName: \"kubernetes.io/projected/85937f48-37f8-4673-ad68-91c8b5f10a8e-kube-api-access-rk9mh\") pod \"nova-cell0-8a17-account-create-6h9tj\" (UID: \"85937f48-37f8-4673-ad68-91c8b5f10a8e\") " pod="openstack/nova-cell0-8a17-account-create-6h9tj" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.486188 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-16c3-account-create-589jb" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.566214 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5302-account-create-kthkc"] Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.569506 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5302-account-create-kthkc" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.573767 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.573843 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kvb7\" (UniqueName: \"kubernetes.io/projected/a9988481-8889-4845-a558-9a3fa4f14322-kube-api-access-6kvb7\") pod \"nova-cell1-db-create-4hgkg\" (UID: \"a9988481-8889-4845-a558-9a3fa4f14322\") " pod="openstack/nova-cell1-db-create-4hgkg" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.573888 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85937f48-37f8-4673-ad68-91c8b5f10a8e-operator-scripts\") pod \"nova-cell0-8a17-account-create-6h9tj\" (UID: \"85937f48-37f8-4673-ad68-91c8b5f10a8e\") " pod="openstack/nova-cell0-8a17-account-create-6h9tj" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.574008 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a9988481-8889-4845-a558-9a3fa4f14322-operator-scripts\") pod \"nova-cell1-db-create-4hgkg\" (UID: \"a9988481-8889-4845-a558-9a3fa4f14322\") " pod="openstack/nova-cell1-db-create-4hgkg" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.574048 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk9mh\" (UniqueName: \"kubernetes.io/projected/85937f48-37f8-4673-ad68-91c8b5f10a8e-kube-api-access-rk9mh\") pod \"nova-cell0-8a17-account-create-6h9tj\" (UID: \"85937f48-37f8-4673-ad68-91c8b5f10a8e\") " pod="openstack/nova-cell0-8a17-account-create-6h9tj" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.574801 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9988481-8889-4845-a558-9a3fa4f14322-operator-scripts\") pod \"nova-cell1-db-create-4hgkg\" (UID: \"a9988481-8889-4845-a558-9a3fa4f14322\") " pod="openstack/nova-cell1-db-create-4hgkg" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.578333 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85937f48-37f8-4673-ad68-91c8b5f10a8e-operator-scripts\") pod \"nova-cell0-8a17-account-create-6h9tj\" (UID: \"85937f48-37f8-4673-ad68-91c8b5f10a8e\") " pod="openstack/nova-cell0-8a17-account-create-6h9tj" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.593083 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kvb7\" (UniqueName: \"kubernetes.io/projected/a9988481-8889-4845-a558-9a3fa4f14322-kube-api-access-6kvb7\") pod \"nova-cell1-db-create-4hgkg\" (UID: \"a9988481-8889-4845-a558-9a3fa4f14322\") " pod="openstack/nova-cell1-db-create-4hgkg" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.598050 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5302-account-create-kthkc"] Nov 24 12:17:54 crc 
kubenswrapper[4930]: I1124 12:17:54.605082 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk9mh\" (UniqueName: \"kubernetes.io/projected/85937f48-37f8-4673-ad68-91c8b5f10a8e-kube-api-access-rk9mh\") pod \"nova-cell0-8a17-account-create-6h9tj\" (UID: \"85937f48-37f8-4673-ad68-91c8b5f10a8e\") " pod="openstack/nova-cell0-8a17-account-create-6h9tj" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.677826 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86pwh\" (UniqueName: \"kubernetes.io/projected/054173d4-2a3d-45a3-bb82-de2c7afc4316-kube-api-access-86pwh\") pod \"nova-cell1-5302-account-create-kthkc\" (UID: \"054173d4-2a3d-45a3-bb82-de2c7afc4316\") " pod="openstack/nova-cell1-5302-account-create-kthkc" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.677946 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054173d4-2a3d-45a3-bb82-de2c7afc4316-operator-scripts\") pod \"nova-cell1-5302-account-create-kthkc\" (UID: \"054173d4-2a3d-45a3-bb82-de2c7afc4316\") " pod="openstack/nova-cell1-5302-account-create-kthkc" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.780801 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86pwh\" (UniqueName: \"kubernetes.io/projected/054173d4-2a3d-45a3-bb82-de2c7afc4316-kube-api-access-86pwh\") pod \"nova-cell1-5302-account-create-kthkc\" (UID: \"054173d4-2a3d-45a3-bb82-de2c7afc4316\") " pod="openstack/nova-cell1-5302-account-create-kthkc" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.781448 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054173d4-2a3d-45a3-bb82-de2c7afc4316-operator-scripts\") pod \"nova-cell1-5302-account-create-kthkc\" (UID: 
\"054173d4-2a3d-45a3-bb82-de2c7afc4316\") " pod="openstack/nova-cell1-5302-account-create-kthkc" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.782415 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054173d4-2a3d-45a3-bb82-de2c7afc4316-operator-scripts\") pod \"nova-cell1-5302-account-create-kthkc\" (UID: \"054173d4-2a3d-45a3-bb82-de2c7afc4316\") " pod="openstack/nova-cell1-5302-account-create-kthkc" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.808873 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86pwh\" (UniqueName: \"kubernetes.io/projected/054173d4-2a3d-45a3-bb82-de2c7afc4316-kube-api-access-86pwh\") pod \"nova-cell1-5302-account-create-kthkc\" (UID: \"054173d4-2a3d-45a3-bb82-de2c7afc4316\") " pod="openstack/nova-cell1-5302-account-create-kthkc" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.836769 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dc67723-020c-471b-9834-f0dda7578d11","Type":"ContainerStarted","Data":"fe3a4026dd42670d8f8e906ada1cf5bc1f6a92c95049d13a007d1df8678f4482"} Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.836822 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dc67723-020c-471b-9834-f0dda7578d11","Type":"ContainerStarted","Data":"8754c0e5b964ffec47f7684370a09a3cae1453e14c4f3ad1a4f1222dc4820d85"} Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.860148 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4hgkg" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.869941 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8a17-account-create-6h9tj" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.973813 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5302-account-create-kthkc" Nov 24 12:17:54 crc kubenswrapper[4930]: I1124 12:17:54.994357 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-d47hr"] Nov 24 12:17:55 crc kubenswrapper[4930]: W1124 12:17:55.110369 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25d0ae65_ed30_465d_a12a_65394f309c5a.slice/crio-c8e8784eaf5ef2ca1477e01a924d3935dc826f43212a46d72fceabcdc1f9b218 WatchSource:0}: Error finding container c8e8784eaf5ef2ca1477e01a924d3935dc826f43212a46d72fceabcdc1f9b218: Status 404 returned error can't find the container with id c8e8784eaf5ef2ca1477e01a924d3935dc826f43212a46d72fceabcdc1f9b218 Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.131987 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vx4m5"] Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.252410 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-16c3-account-create-589jb"] Nov 24 12:17:55 crc kubenswrapper[4930]: W1124 12:17:55.273223 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a3ef300_b344_4fba_a285_f85430bccd47.slice/crio-f861d0f6dd66fcc86282c464d80c5dc38bd9fce7d0f65eabcc7e687a822949f3 WatchSource:0}: Error finding container f861d0f6dd66fcc86282c464d80c5dc38bd9fce7d0f65eabcc7e687a822949f3: Status 404 returned error can't find the container with id f861d0f6dd66fcc86282c464d80c5dc38bd9fce7d0f65eabcc7e687a822949f3 Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.472835 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4hgkg"] Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.563505 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8a17-account-create-6h9tj"] Nov 24 
12:17:55 crc kubenswrapper[4930]: W1124 12:17:55.573831 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85937f48_37f8_4673_ad68_91c8b5f10a8e.slice/crio-d012bb244b58df7f62fb003f26a6f2419bd7079d36485561a3d8eead34d79885 WatchSource:0}: Error finding container d012bb244b58df7f62fb003f26a6f2419bd7079d36485561a3d8eead34d79885: Status 404 returned error can't find the container with id d012bb244b58df7f62fb003f26a6f2419bd7079d36485561a3d8eead34d79885 Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.695281 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5302-account-create-kthkc"] Nov 24 12:17:55 crc kubenswrapper[4930]: W1124 12:17:55.748298 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod054173d4_2a3d_45a3_bb82_de2c7afc4316.slice/crio-bec1b55d29c76d4556639ba6d577917c8b0f31e07f062591e4ee7cc5e148f34f WatchSource:0}: Error finding container bec1b55d29c76d4556639ba6d577917c8b0f31e07f062591e4ee7cc5e148f34f: Status 404 returned error can't find the container with id bec1b55d29c76d4556639ba6d577917c8b0f31e07f062591e4ee7cc5e148f34f Nov 24 12:17:55 crc kubenswrapper[4930]: E1124 12:17:55.849511 4930 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a37265e_62d9_4ebc_9793_aed961e89590.slice/crio-60132ca2ded6560fabec0c323b4034be0cb1dd5e85c1ca4a161d0aaf80a07014.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a37265e_62d9_4ebc_9793_aed961e89590.slice/crio-conmon-60132ca2ded6560fabec0c323b4034be0cb1dd5e85c1ca4a161d0aaf80a07014.scope\": RecentStats: unable to find data in memory cache]" Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.853730 4930 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vx4m5" event={"ID":"25d0ae65-ed30-465d-a12a-65394f309c5a","Type":"ContainerStarted","Data":"fcccdb2f5d804cacd5cddd4207cdba8a67cbe037981a7270ef51b3186efd8502"} Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.853771 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vx4m5" event={"ID":"25d0ae65-ed30-465d-a12a-65394f309c5a","Type":"ContainerStarted","Data":"c8e8784eaf5ef2ca1477e01a924d3935dc826f43212a46d72fceabcdc1f9b218"} Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.864379 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dc67723-020c-471b-9834-f0dda7578d11","Type":"ContainerStarted","Data":"4f715907858505b36e1af565cf0eb7665b0eeb07799776ce497bb6814d5c9948"} Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.868262 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-16c3-account-create-589jb" event={"ID":"2a3ef300-b344-4fba-a285-f85430bccd47","Type":"ContainerStarted","Data":"14561a44121a426c095fdbca63b4658747afaef6e840a09db2b64e0512cc96ce"} Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.868303 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-16c3-account-create-589jb" event={"ID":"2a3ef300-b344-4fba-a285-f85430bccd47","Type":"ContainerStarted","Data":"f861d0f6dd66fcc86282c464d80c5dc38bd9fce7d0f65eabcc7e687a822949f3"} Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.878266 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-vx4m5" podStartSLOduration=1.878247496 podStartE2EDuration="1.878247496s" podCreationTimestamp="2025-11-24 12:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:55.871137031 +0000 UTC m=+1122.485464981" 
watchObservedRunningTime="2025-11-24 12:17:55.878247496 +0000 UTC m=+1122.492575446" Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.882360 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8a17-account-create-6h9tj" event={"ID":"85937f48-37f8-4673-ad68-91c8b5f10a8e","Type":"ContainerStarted","Data":"d012bb244b58df7f62fb003f26a6f2419bd7079d36485561a3d8eead34d79885"} Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.892458 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-16c3-account-create-589jb" podStartSLOduration=1.892441974 podStartE2EDuration="1.892441974s" podCreationTimestamp="2025-11-24 12:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:55.889983963 +0000 UTC m=+1122.504311913" watchObservedRunningTime="2025-11-24 12:17:55.892441974 +0000 UTC m=+1122.506769924" Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.892861 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5302-account-create-kthkc" event={"ID":"054173d4-2a3d-45a3-bb82-de2c7afc4316","Type":"ContainerStarted","Data":"bec1b55d29c76d4556639ba6d577917c8b0f31e07f062591e4ee7cc5e148f34f"} Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.904402 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4hgkg" event={"ID":"a9988481-8889-4845-a558-9a3fa4f14322","Type":"ContainerStarted","Data":"75efba0be8c0c8e33bed9aa6315e8d73ac74b3e8070744b42abb2e750eaca706"} Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.908932 4930 generic.go:334] "Generic (PLEG): container finished" podID="8a37265e-62d9-4ebc-9793-aed961e89590" containerID="60132ca2ded6560fabec0c323b4034be0cb1dd5e85c1ca4a161d0aaf80a07014" exitCode=0 Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.909000 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-db-create-d47hr" event={"ID":"8a37265e-62d9-4ebc-9793-aed961e89590","Type":"ContainerDied","Data":"60132ca2ded6560fabec0c323b4034be0cb1dd5e85c1ca4a161d0aaf80a07014"} Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.909035 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d47hr" event={"ID":"8a37265e-62d9-4ebc-9793-aed961e89590","Type":"ContainerStarted","Data":"1607540e9b6b6024dbcc33c83039ad2ca2f16e63d5accc5d86bda6e01681bc89"} Nov 24 12:17:55 crc kubenswrapper[4930]: I1124 12:17:55.920005 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-4hgkg" podStartSLOduration=1.919987086 podStartE2EDuration="1.919987086s" podCreationTimestamp="2025-11-24 12:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:17:55.918611027 +0000 UTC m=+1122.532938977" watchObservedRunningTime="2025-11-24 12:17:55.919987086 +0000 UTC m=+1122.534315036" Nov 24 12:17:56 crc kubenswrapper[4930]: I1124 12:17:56.923600 4930 generic.go:334] "Generic (PLEG): container finished" podID="85937f48-37f8-4673-ad68-91c8b5f10a8e" containerID="2c00ab2a0f29c7f54a3a5bcbe15f2dad24fec78078a3656baa2217c748314ff0" exitCode=0 Nov 24 12:17:56 crc kubenswrapper[4930]: I1124 12:17:56.923734 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8a17-account-create-6h9tj" event={"ID":"85937f48-37f8-4673-ad68-91c8b5f10a8e","Type":"ContainerDied","Data":"2c00ab2a0f29c7f54a3a5bcbe15f2dad24fec78078a3656baa2217c748314ff0"} Nov 24 12:17:56 crc kubenswrapper[4930]: I1124 12:17:56.925789 4930 generic.go:334] "Generic (PLEG): container finished" podID="054173d4-2a3d-45a3-bb82-de2c7afc4316" containerID="0b1468d4445c9a3e7f6ce5bbe03a1915b9db3393bbabead6b2be554463fc2185" exitCode=0 Nov 24 12:17:56 crc kubenswrapper[4930]: I1124 12:17:56.925855 4930 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-5302-account-create-kthkc" event={"ID":"054173d4-2a3d-45a3-bb82-de2c7afc4316","Type":"ContainerDied","Data":"0b1468d4445c9a3e7f6ce5bbe03a1915b9db3393bbabead6b2be554463fc2185"} Nov 24 12:17:56 crc kubenswrapper[4930]: I1124 12:17:56.928270 4930 generic.go:334] "Generic (PLEG): container finished" podID="a9988481-8889-4845-a558-9a3fa4f14322" containerID="aa919e006509b95e84c3f86836308d4e51d895cadcd3e5f57a1609f95dbb352f" exitCode=0 Nov 24 12:17:56 crc kubenswrapper[4930]: I1124 12:17:56.928310 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4hgkg" event={"ID":"a9988481-8889-4845-a558-9a3fa4f14322","Type":"ContainerDied","Data":"aa919e006509b95e84c3f86836308d4e51d895cadcd3e5f57a1609f95dbb352f"} Nov 24 12:17:56 crc kubenswrapper[4930]: I1124 12:17:56.931964 4930 generic.go:334] "Generic (PLEG): container finished" podID="25d0ae65-ed30-465d-a12a-65394f309c5a" containerID="fcccdb2f5d804cacd5cddd4207cdba8a67cbe037981a7270ef51b3186efd8502" exitCode=0 Nov 24 12:17:56 crc kubenswrapper[4930]: I1124 12:17:56.932049 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vx4m5" event={"ID":"25d0ae65-ed30-465d-a12a-65394f309c5a","Type":"ContainerDied","Data":"fcccdb2f5d804cacd5cddd4207cdba8a67cbe037981a7270ef51b3186efd8502"} Nov 24 12:17:56 crc kubenswrapper[4930]: I1124 12:17:56.940883 4930 generic.go:334] "Generic (PLEG): container finished" podID="2a3ef300-b344-4fba-a285-f85430bccd47" containerID="14561a44121a426c095fdbca63b4658747afaef6e840a09db2b64e0512cc96ce" exitCode=0 Nov 24 12:17:56 crc kubenswrapper[4930]: I1124 12:17:56.941264 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-16c3-account-create-589jb" event={"ID":"2a3ef300-b344-4fba-a285-f85430bccd47","Type":"ContainerDied","Data":"14561a44121a426c095fdbca63b4658747afaef6e840a09db2b64e0512cc96ce"} Nov 24 12:17:57 crc kubenswrapper[4930]: I1124 12:17:57.353953 4930 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-d47hr" Nov 24 12:17:57 crc kubenswrapper[4930]: I1124 12:17:57.443656 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a37265e-62d9-4ebc-9793-aed961e89590-operator-scripts\") pod \"8a37265e-62d9-4ebc-9793-aed961e89590\" (UID: \"8a37265e-62d9-4ebc-9793-aed961e89590\") " Nov 24 12:17:57 crc kubenswrapper[4930]: I1124 12:17:57.444101 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8z6s\" (UniqueName: \"kubernetes.io/projected/8a37265e-62d9-4ebc-9793-aed961e89590-kube-api-access-n8z6s\") pod \"8a37265e-62d9-4ebc-9793-aed961e89590\" (UID: \"8a37265e-62d9-4ebc-9793-aed961e89590\") " Nov 24 12:17:57 crc kubenswrapper[4930]: I1124 12:17:57.444515 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a37265e-62d9-4ebc-9793-aed961e89590-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a37265e-62d9-4ebc-9793-aed961e89590" (UID: "8a37265e-62d9-4ebc-9793-aed961e89590"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:57 crc kubenswrapper[4930]: I1124 12:17:57.445068 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a37265e-62d9-4ebc-9793-aed961e89590-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:57 crc kubenswrapper[4930]: I1124 12:17:57.449099 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a37265e-62d9-4ebc-9793-aed961e89590-kube-api-access-n8z6s" (OuterVolumeSpecName: "kube-api-access-n8z6s") pod "8a37265e-62d9-4ebc-9793-aed961e89590" (UID: "8a37265e-62d9-4ebc-9793-aed961e89590"). InnerVolumeSpecName "kube-api-access-n8z6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:57 crc kubenswrapper[4930]: I1124 12:17:57.546369 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8z6s\" (UniqueName: \"kubernetes.io/projected/8a37265e-62d9-4ebc-9793-aed961e89590-kube-api-access-n8z6s\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:57 crc kubenswrapper[4930]: I1124 12:17:57.950594 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-d47hr" Nov 24 12:17:57 crc kubenswrapper[4930]: I1124 12:17:57.950650 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d47hr" event={"ID":"8a37265e-62d9-4ebc-9793-aed961e89590","Type":"ContainerDied","Data":"1607540e9b6b6024dbcc33c83039ad2ca2f16e63d5accc5d86bda6e01681bc89"} Nov 24 12:17:57 crc kubenswrapper[4930]: I1124 12:17:57.950688 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1607540e9b6b6024dbcc33c83039ad2ca2f16e63d5accc5d86bda6e01681bc89" Nov 24 12:17:57 crc kubenswrapper[4930]: I1124 12:17:57.960834 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dc67723-020c-471b-9834-f0dda7578d11","Type":"ContainerStarted","Data":"d161937b89f55d327c51ae27ae4372b4ce36bc7a7a011d1bdad744b31ee9e45a"} Nov 24 12:17:57 crc kubenswrapper[4930]: I1124 12:17:57.996088 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9965654390000003 podStartE2EDuration="6.996072516s" podCreationTimestamp="2025-11-24 12:17:51 +0000 UTC" firstStartedPulling="2025-11-24 12:17:52.908753122 +0000 UTC m=+1119.523081072" lastFinishedPulling="2025-11-24 12:17:56.908260199 +0000 UTC m=+1123.522588149" observedRunningTime="2025-11-24 12:17:57.99482712 +0000 UTC m=+1124.609155080" watchObservedRunningTime="2025-11-24 12:17:57.996072516 +0000 UTC m=+1124.610400466" Nov 24 12:17:58 crc 
kubenswrapper[4930]: I1124 12:17:58.453785 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-16c3-account-create-589jb" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.568820 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a3ef300-b344-4fba-a285-f85430bccd47-operator-scripts\") pod \"2a3ef300-b344-4fba-a285-f85430bccd47\" (UID: \"2a3ef300-b344-4fba-a285-f85430bccd47\") " Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.569053 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q8jf\" (UniqueName: \"kubernetes.io/projected/2a3ef300-b344-4fba-a285-f85430bccd47-kube-api-access-9q8jf\") pod \"2a3ef300-b344-4fba-a285-f85430bccd47\" (UID: \"2a3ef300-b344-4fba-a285-f85430bccd47\") " Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.569447 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3ef300-b344-4fba-a285-f85430bccd47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a3ef300-b344-4fba-a285-f85430bccd47" (UID: "2a3ef300-b344-4fba-a285-f85430bccd47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.576785 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3ef300-b344-4fba-a285-f85430bccd47-kube-api-access-9q8jf" (OuterVolumeSpecName: "kube-api-access-9q8jf") pod "2a3ef300-b344-4fba-a285-f85430bccd47" (UID: "2a3ef300-b344-4fba-a285-f85430bccd47"). InnerVolumeSpecName "kube-api-access-9q8jf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.671110 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q8jf\" (UniqueName: \"kubernetes.io/projected/2a3ef300-b344-4fba-a285-f85430bccd47-kube-api-access-9q8jf\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.671155 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a3ef300-b344-4fba-a285-f85430bccd47-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.721899 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8a17-account-create-6h9tj" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.730777 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5302-account-create-kthkc" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.736204 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4hgkg" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.743813 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vx4m5" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.772129 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk9mh\" (UniqueName: \"kubernetes.io/projected/85937f48-37f8-4673-ad68-91c8b5f10a8e-kube-api-access-rk9mh\") pod \"85937f48-37f8-4673-ad68-91c8b5f10a8e\" (UID: \"85937f48-37f8-4673-ad68-91c8b5f10a8e\") " Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.772430 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85937f48-37f8-4673-ad68-91c8b5f10a8e-operator-scripts\") pod \"85937f48-37f8-4673-ad68-91c8b5f10a8e\" (UID: \"85937f48-37f8-4673-ad68-91c8b5f10a8e\") " Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.772869 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85937f48-37f8-4673-ad68-91c8b5f10a8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85937f48-37f8-4673-ad68-91c8b5f10a8e" (UID: "85937f48-37f8-4673-ad68-91c8b5f10a8e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.773851 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85937f48-37f8-4673-ad68-91c8b5f10a8e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.780301 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85937f48-37f8-4673-ad68-91c8b5f10a8e-kube-api-access-rk9mh" (OuterVolumeSpecName: "kube-api-access-rk9mh") pod "85937f48-37f8-4673-ad68-91c8b5f10a8e" (UID: "85937f48-37f8-4673-ad68-91c8b5f10a8e"). InnerVolumeSpecName "kube-api-access-rk9mh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.875530 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9f6t\" (UniqueName: \"kubernetes.io/projected/25d0ae65-ed30-465d-a12a-65394f309c5a-kube-api-access-x9f6t\") pod \"25d0ae65-ed30-465d-a12a-65394f309c5a\" (UID: \"25d0ae65-ed30-465d-a12a-65394f309c5a\") " Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.875659 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86pwh\" (UniqueName: \"kubernetes.io/projected/054173d4-2a3d-45a3-bb82-de2c7afc4316-kube-api-access-86pwh\") pod \"054173d4-2a3d-45a3-bb82-de2c7afc4316\" (UID: \"054173d4-2a3d-45a3-bb82-de2c7afc4316\") " Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.875706 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kvb7\" (UniqueName: \"kubernetes.io/projected/a9988481-8889-4845-a558-9a3fa4f14322-kube-api-access-6kvb7\") pod \"a9988481-8889-4845-a558-9a3fa4f14322\" (UID: \"a9988481-8889-4845-a558-9a3fa4f14322\") " Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.875741 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054173d4-2a3d-45a3-bb82-de2c7afc4316-operator-scripts\") pod \"054173d4-2a3d-45a3-bb82-de2c7afc4316\" (UID: \"054173d4-2a3d-45a3-bb82-de2c7afc4316\") " Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.875780 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9988481-8889-4845-a558-9a3fa4f14322-operator-scripts\") pod \"a9988481-8889-4845-a558-9a3fa4f14322\" (UID: \"a9988481-8889-4845-a558-9a3fa4f14322\") " Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.875843 4930 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d0ae65-ed30-465d-a12a-65394f309c5a-operator-scripts\") pod \"25d0ae65-ed30-465d-a12a-65394f309c5a\" (UID: \"25d0ae65-ed30-465d-a12a-65394f309c5a\") " Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.876485 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/054173d4-2a3d-45a3-bb82-de2c7afc4316-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "054173d4-2a3d-45a3-bb82-de2c7afc4316" (UID: "054173d4-2a3d-45a3-bb82-de2c7afc4316"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.876717 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25d0ae65-ed30-465d-a12a-65394f309c5a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25d0ae65-ed30-465d-a12a-65394f309c5a" (UID: "25d0ae65-ed30-465d-a12a-65394f309c5a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.877118 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk9mh\" (UniqueName: \"kubernetes.io/projected/85937f48-37f8-4673-ad68-91c8b5f10a8e-kube-api-access-rk9mh\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.877143 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054173d4-2a3d-45a3-bb82-de2c7afc4316-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.877155 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9988481-8889-4845-a558-9a3fa4f14322-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9988481-8889-4845-a558-9a3fa4f14322" (UID: "a9988481-8889-4845-a558-9a3fa4f14322"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.877191 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d0ae65-ed30-465d-a12a-65394f309c5a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.880331 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/054173d4-2a3d-45a3-bb82-de2c7afc4316-kube-api-access-86pwh" (OuterVolumeSpecName: "kube-api-access-86pwh") pod "054173d4-2a3d-45a3-bb82-de2c7afc4316" (UID: "054173d4-2a3d-45a3-bb82-de2c7afc4316"). InnerVolumeSpecName "kube-api-access-86pwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.880639 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d0ae65-ed30-465d-a12a-65394f309c5a-kube-api-access-x9f6t" (OuterVolumeSpecName: "kube-api-access-x9f6t") pod "25d0ae65-ed30-465d-a12a-65394f309c5a" (UID: "25d0ae65-ed30-465d-a12a-65394f309c5a"). InnerVolumeSpecName "kube-api-access-x9f6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.881863 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9988481-8889-4845-a558-9a3fa4f14322-kube-api-access-6kvb7" (OuterVolumeSpecName: "kube-api-access-6kvb7") pod "a9988481-8889-4845-a558-9a3fa4f14322" (UID: "a9988481-8889-4845-a558-9a3fa4f14322"). InnerVolumeSpecName "kube-api-access-6kvb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.978967 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9f6t\" (UniqueName: \"kubernetes.io/projected/25d0ae65-ed30-465d-a12a-65394f309c5a-kube-api-access-x9f6t\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.979291 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86pwh\" (UniqueName: \"kubernetes.io/projected/054173d4-2a3d-45a3-bb82-de2c7afc4316-kube-api-access-86pwh\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.979303 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kvb7\" (UniqueName: \"kubernetes.io/projected/a9988481-8889-4845-a558-9a3fa4f14322-kube-api-access-6kvb7\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.979315 4930 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a9988481-8889-4845-a558-9a3fa4f14322-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.986600 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vx4m5" event={"ID":"25d0ae65-ed30-465d-a12a-65394f309c5a","Type":"ContainerDied","Data":"c8e8784eaf5ef2ca1477e01a924d3935dc826f43212a46d72fceabcdc1f9b218"} Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.986629 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vx4m5" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.986646 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8e8784eaf5ef2ca1477e01a924d3935dc826f43212a46d72fceabcdc1f9b218" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.996888 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-16c3-account-create-589jb" Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.996899 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-16c3-account-create-589jb" event={"ID":"2a3ef300-b344-4fba-a285-f85430bccd47","Type":"ContainerDied","Data":"f861d0f6dd66fcc86282c464d80c5dc38bd9fce7d0f65eabcc7e687a822949f3"} Nov 24 12:17:58 crc kubenswrapper[4930]: I1124 12:17:58.996953 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f861d0f6dd66fcc86282c464d80c5dc38bd9fce7d0f65eabcc7e687a822949f3" Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.004468 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8a17-account-create-6h9tj" event={"ID":"85937f48-37f8-4673-ad68-91c8b5f10a8e","Type":"ContainerDied","Data":"d012bb244b58df7f62fb003f26a6f2419bd7079d36485561a3d8eead34d79885"} Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.004506 4930 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="d012bb244b58df7f62fb003f26a6f2419bd7079d36485561a3d8eead34d79885" Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.004607 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8a17-account-create-6h9tj" Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.008575 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5302-account-create-kthkc" event={"ID":"054173d4-2a3d-45a3-bb82-de2c7afc4316","Type":"ContainerDied","Data":"bec1b55d29c76d4556639ba6d577917c8b0f31e07f062591e4ee7cc5e148f34f"} Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.008624 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bec1b55d29c76d4556639ba6d577917c8b0f31e07f062591e4ee7cc5e148f34f" Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.008593 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5302-account-create-kthkc" Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.011626 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4hgkg" Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.011702 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4hgkg" event={"ID":"a9988481-8889-4845-a558-9a3fa4f14322","Type":"ContainerDied","Data":"75efba0be8c0c8e33bed9aa6315e8d73ac74b3e8070744b42abb2e750eaca706"} Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.011736 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75efba0be8c0c8e33bed9aa6315e8d73ac74b3e8070744b42abb2e750eaca706" Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.012717 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.464332 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f4c64f46c-fdhkr" Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.723921 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.724198 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f0248953-855e-4f5c-9811-b893580d90cd" containerName="glance-log" containerID="cri-o://5a8fa8bc9d5f4ca0d5e698e0812e92307719280a80a069cb7ab6620d8da8441d" gracePeriod=30 Nov 24 12:17:59 crc kubenswrapper[4930]: I1124 12:17:59.724299 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f0248953-855e-4f5c-9811-b893580d90cd" containerName="glance-httpd" containerID="cri-o://c1aad06afd7954be4c177cdb6633ffc0943fad57338d34a101c37fcbe3c54083" gracePeriod=30 Nov 24 12:18:00 crc kubenswrapper[4930]: I1124 12:18:00.022288 4930 generic.go:334] "Generic (PLEG): container finished" podID="f0248953-855e-4f5c-9811-b893580d90cd" 
containerID="5a8fa8bc9d5f4ca0d5e698e0812e92307719280a80a069cb7ab6620d8da8441d" exitCode=143 Nov 24 12:18:00 crc kubenswrapper[4930]: I1124 12:18:00.022371 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f0248953-855e-4f5c-9811-b893580d90cd","Type":"ContainerDied","Data":"5a8fa8bc9d5f4ca0d5e698e0812e92307719280a80a069cb7ab6620d8da8441d"} Nov 24 12:18:01 crc kubenswrapper[4930]: I1124 12:18:01.189481 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:18:01 crc kubenswrapper[4930]: I1124 12:18:01.191043 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c74d03fb-686f-44a0-9132-02dd2c5d3d46" containerName="glance-httpd" containerID="cri-o://f3b4a4d6bd8138c5b5da28bb0b3c0eea4f5f015a304ef49243235095ef3a0322" gracePeriod=30 Nov 24 12:18:01 crc kubenswrapper[4930]: I1124 12:18:01.191393 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c74d03fb-686f-44a0-9132-02dd2c5d3d46" containerName="glance-log" containerID="cri-o://7f8a3d7ed994aa71f942f412422c941ae6e52fb01d4d6018b5b34c5dce804729" gracePeriod=30 Nov 24 12:18:01 crc kubenswrapper[4930]: I1124 12:18:01.433255 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:18:01 crc kubenswrapper[4930]: I1124 12:18:01.433520 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="ceilometer-central-agent" containerID="cri-o://8754c0e5b964ffec47f7684370a09a3cae1453e14c4f3ad1a4f1222dc4820d85" gracePeriod=30 Nov 24 12:18:01 crc kubenswrapper[4930]: I1124 12:18:01.433688 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="proxy-httpd" containerID="cri-o://d161937b89f55d327c51ae27ae4372b4ce36bc7a7a011d1bdad744b31ee9e45a" gracePeriod=30 Nov 24 12:18:01 crc kubenswrapper[4930]: I1124 12:18:01.433709 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="ceilometer-notification-agent" containerID="cri-o://fe3a4026dd42670d8f8e906ada1cf5bc1f6a92c95049d13a007d1df8678f4482" gracePeriod=30 Nov 24 12:18:01 crc kubenswrapper[4930]: I1124 12:18:01.433972 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="sg-core" containerID="cri-o://4f715907858505b36e1af565cf0eb7665b0eeb07799776ce497bb6814d5c9948" gracePeriod=30 Nov 24 12:18:01 crc kubenswrapper[4930]: I1124 12:18:01.804337 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69b96dd4dd-2xcvn" podUID="dc1269fb-938b-4634-a683-9b0375e01915" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Nov 24 12:18:01 crc kubenswrapper[4930]: I1124 12:18:01.804796 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.050402 4930 generic.go:334] "Generic (PLEG): container finished" podID="7dc67723-020c-471b-9834-f0dda7578d11" containerID="d161937b89f55d327c51ae27ae4372b4ce36bc7a7a011d1bdad744b31ee9e45a" exitCode=0 Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.050432 4930 generic.go:334] "Generic (PLEG): container finished" podID="7dc67723-020c-471b-9834-f0dda7578d11" containerID="4f715907858505b36e1af565cf0eb7665b0eeb07799776ce497bb6814d5c9948" exitCode=2 Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 
12:18:02.050440 4930 generic.go:334] "Generic (PLEG): container finished" podID="7dc67723-020c-471b-9834-f0dda7578d11" containerID="fe3a4026dd42670d8f8e906ada1cf5bc1f6a92c95049d13a007d1df8678f4482" exitCode=0 Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.050448 4930 generic.go:334] "Generic (PLEG): container finished" podID="7dc67723-020c-471b-9834-f0dda7578d11" containerID="8754c0e5b964ffec47f7684370a09a3cae1453e14c4f3ad1a4f1222dc4820d85" exitCode=0 Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.050505 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dc67723-020c-471b-9834-f0dda7578d11","Type":"ContainerDied","Data":"d161937b89f55d327c51ae27ae4372b4ce36bc7a7a011d1bdad744b31ee9e45a"} Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.050530 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dc67723-020c-471b-9834-f0dda7578d11","Type":"ContainerDied","Data":"4f715907858505b36e1af565cf0eb7665b0eeb07799776ce497bb6814d5c9948"} Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.050676 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dc67723-020c-471b-9834-f0dda7578d11","Type":"ContainerDied","Data":"fe3a4026dd42670d8f8e906ada1cf5bc1f6a92c95049d13a007d1df8678f4482"} Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.050691 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dc67723-020c-471b-9834-f0dda7578d11","Type":"ContainerDied","Data":"8754c0e5b964ffec47f7684370a09a3cae1453e14c4f3ad1a4f1222dc4820d85"} Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.053612 4930 generic.go:334] "Generic (PLEG): container finished" podID="c74d03fb-686f-44a0-9132-02dd2c5d3d46" containerID="7f8a3d7ed994aa71f942f412422c941ae6e52fb01d4d6018b5b34c5dce804729" exitCode=143 Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.053636 4930 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c74d03fb-686f-44a0-9132-02dd2c5d3d46","Type":"ContainerDied","Data":"7f8a3d7ed994aa71f942f412422c941ae6e52fb01d4d6018b5b34c5dce804729"} Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.174135 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.242229 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-scripts\") pod \"7dc67723-020c-471b-9834-f0dda7578d11\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.242317 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dc67723-020c-471b-9834-f0dda7578d11-log-httpd\") pod \"7dc67723-020c-471b-9834-f0dda7578d11\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.242518 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-config-data\") pod \"7dc67723-020c-471b-9834-f0dda7578d11\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.242661 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjt9k\" (UniqueName: \"kubernetes.io/projected/7dc67723-020c-471b-9834-f0dda7578d11-kube-api-access-fjt9k\") pod \"7dc67723-020c-471b-9834-f0dda7578d11\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.242690 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-combined-ca-bundle\") pod \"7dc67723-020c-471b-9834-f0dda7578d11\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.242736 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dc67723-020c-471b-9834-f0dda7578d11-run-httpd\") pod \"7dc67723-020c-471b-9834-f0dda7578d11\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.242776 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-sg-core-conf-yaml\") pod \"7dc67723-020c-471b-9834-f0dda7578d11\" (UID: \"7dc67723-020c-471b-9834-f0dda7578d11\") " Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.243973 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dc67723-020c-471b-9834-f0dda7578d11-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7dc67723-020c-471b-9834-f0dda7578d11" (UID: "7dc67723-020c-471b-9834-f0dda7578d11"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.244102 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dc67723-020c-471b-9834-f0dda7578d11-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7dc67723-020c-471b-9834-f0dda7578d11" (UID: "7dc67723-020c-471b-9834-f0dda7578d11"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.249188 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc67723-020c-471b-9834-f0dda7578d11-kube-api-access-fjt9k" (OuterVolumeSpecName: "kube-api-access-fjt9k") pod "7dc67723-020c-471b-9834-f0dda7578d11" (UID: "7dc67723-020c-471b-9834-f0dda7578d11"). InnerVolumeSpecName "kube-api-access-fjt9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.249849 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-scripts" (OuterVolumeSpecName: "scripts") pod "7dc67723-020c-471b-9834-f0dda7578d11" (UID: "7dc67723-020c-471b-9834-f0dda7578d11"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.275703 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7dc67723-020c-471b-9834-f0dda7578d11" (UID: "7dc67723-020c-471b-9834-f0dda7578d11"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.336526 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7dc67723-020c-471b-9834-f0dda7578d11" (UID: "7dc67723-020c-471b-9834-f0dda7578d11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.345035 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjt9k\" (UniqueName: \"kubernetes.io/projected/7dc67723-020c-471b-9834-f0dda7578d11-kube-api-access-fjt9k\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.345062 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.345071 4930 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dc67723-020c-471b-9834-f0dda7578d11-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.345080 4930 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.345089 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.345098 4930 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dc67723-020c-471b-9834-f0dda7578d11-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.352207 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-config-data" (OuterVolumeSpecName: "config-data") pod "7dc67723-020c-471b-9834-f0dda7578d11" (UID: "7dc67723-020c-471b-9834-f0dda7578d11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:18:02 crc kubenswrapper[4930]: I1124 12:18:02.446882 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc67723-020c-471b-9834-f0dda7578d11-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.068757 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dc67723-020c-471b-9834-f0dda7578d11","Type":"ContainerDied","Data":"3a1d0a00bb4701dd963b3f10dcbae7579b25bc63a9aaae1a96faae47e74dda79"}
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.068812 4930 scope.go:117] "RemoveContainer" containerID="d161937b89f55d327c51ae27ae4372b4ce36bc7a7a011d1bdad744b31ee9e45a"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.068882 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.072203 4930 generic.go:334] "Generic (PLEG): container finished" podID="f0248953-855e-4f5c-9811-b893580d90cd" containerID="c1aad06afd7954be4c177cdb6633ffc0943fad57338d34a101c37fcbe3c54083" exitCode=0
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.072253 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f0248953-855e-4f5c-9811-b893580d90cd","Type":"ContainerDied","Data":"c1aad06afd7954be4c177cdb6633ffc0943fad57338d34a101c37fcbe3c54083"}
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.100665 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.101828 4930 scope.go:117] "RemoveContainer" containerID="4f715907858505b36e1af565cf0eb7665b0eeb07799776ce497bb6814d5c9948"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.109312 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.131906 4930 scope.go:117] "RemoveContainer" containerID="fe3a4026dd42670d8f8e906ada1cf5bc1f6a92c95049d13a007d1df8678f4482"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.151788 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:18:03 crc kubenswrapper[4930]: E1124 12:18:03.152264 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85937f48-37f8-4673-ad68-91c8b5f10a8e" containerName="mariadb-account-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152476 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="85937f48-37f8-4673-ad68-91c8b5f10a8e" containerName="mariadb-account-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: E1124 12:18:03.152496 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9988481-8889-4845-a558-9a3fa4f14322" containerName="mariadb-database-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152504 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9988481-8889-4845-a558-9a3fa4f14322" containerName="mariadb-database-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: E1124 12:18:03.152521 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3ef300-b344-4fba-a285-f85430bccd47" containerName="mariadb-account-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152530 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3ef300-b344-4fba-a285-f85430bccd47" containerName="mariadb-account-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: E1124 12:18:03.152561 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="ceilometer-central-agent"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152572 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="ceilometer-central-agent"
Nov 24 12:18:03 crc kubenswrapper[4930]: E1124 12:18:03.152585 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d0ae65-ed30-465d-a12a-65394f309c5a" containerName="mariadb-database-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152593 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d0ae65-ed30-465d-a12a-65394f309c5a" containerName="mariadb-database-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: E1124 12:18:03.152626 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a37265e-62d9-4ebc-9793-aed961e89590" containerName="mariadb-database-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152635 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a37265e-62d9-4ebc-9793-aed961e89590" containerName="mariadb-database-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: E1124 12:18:03.152648 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="sg-core"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152657 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="sg-core"
Nov 24 12:18:03 crc kubenswrapper[4930]: E1124 12:18:03.152668 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054173d4-2a3d-45a3-bb82-de2c7afc4316" containerName="mariadb-account-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152675 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="054173d4-2a3d-45a3-bb82-de2c7afc4316" containerName="mariadb-account-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: E1124 12:18:03.152694 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="ceilometer-notification-agent"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152702 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="ceilometer-notification-agent"
Nov 24 12:18:03 crc kubenswrapper[4930]: E1124 12:18:03.152716 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="proxy-httpd"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152722 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="proxy-httpd"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152933 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="sg-core"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152954 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="054173d4-2a3d-45a3-bb82-de2c7afc4316" containerName="mariadb-account-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152962 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="ceilometer-notification-agent"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152977 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="ceilometer-central-agent"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.152992 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9988481-8889-4845-a558-9a3fa4f14322" containerName="mariadb-database-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.153037 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d0ae65-ed30-465d-a12a-65394f309c5a" containerName="mariadb-database-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.153051 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="85937f48-37f8-4673-ad68-91c8b5f10a8e" containerName="mariadb-account-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.153065 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a37265e-62d9-4ebc-9793-aed961e89590" containerName="mariadb-database-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.153083 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3ef300-b344-4fba-a285-f85430bccd47" containerName="mariadb-account-create"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.153099 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc67723-020c-471b-9834-f0dda7578d11" containerName="proxy-httpd"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.157331 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.164927 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.165029 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.173254 4930 scope.go:117] "RemoveContainer" containerID="8754c0e5b964ffec47f7684370a09a3cae1453e14c4f3ad1a4f1222dc4820d85"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.193017 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.262932 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-run-httpd\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.263237 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-log-httpd\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.263318 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.263356 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-config-data\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.263405 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzf6g\" (UniqueName: \"kubernetes.io/projected/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-kube-api-access-pzf6g\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.263584 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-scripts\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.263627 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID:
\"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.371065 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-run-httpd\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.372105 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-log-httpd\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.372491 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.374657 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-config-data\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.374857 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzf6g\" (UniqueName: \"kubernetes.io/projected/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-kube-api-access-pzf6g\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.371977 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-run-httpd\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.372434 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-log-httpd\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.375407 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-scripts\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.375915 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.379334 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.379351 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-config-data\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.380648 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.385535 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-scripts\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.397774 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzf6g\" (UniqueName: \"kubernetes.io/projected/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-kube-api-access-pzf6g\") pod \"ceilometer-0\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.477619 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.492262 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.578979 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0248953-855e-4f5c-9811-b893580d90cd-httpd-run\") pod \"f0248953-855e-4f5c-9811-b893580d90cd\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") "
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.579340 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-public-tls-certs\") pod \"f0248953-855e-4f5c-9811-b893580d90cd\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") "
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.579429 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt2v5\" (UniqueName: \"kubernetes.io/projected/f0248953-855e-4f5c-9811-b893580d90cd-kube-api-access-xt2v5\") pod \"f0248953-855e-4f5c-9811-b893580d90cd\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") "
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.579460 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0248953-855e-4f5c-9811-b893580d90cd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f0248953-855e-4f5c-9811-b893580d90cd" (UID: "f0248953-855e-4f5c-9811-b893580d90cd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.579505 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-config-data\") pod \"f0248953-855e-4f5c-9811-b893580d90cd\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") "
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.579654 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-combined-ca-bundle\") pod \"f0248953-855e-4f5c-9811-b893580d90cd\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") "
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.579767 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"f0248953-855e-4f5c-9811-b893580d90cd\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") "
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.579803 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0248953-855e-4f5c-9811-b893580d90cd-logs\") pod \"f0248953-855e-4f5c-9811-b893580d90cd\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") "
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.579879 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-scripts\") pod \"f0248953-855e-4f5c-9811-b893580d90cd\" (UID: \"f0248953-855e-4f5c-9811-b893580d90cd\") "
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.580413 4930 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0248953-855e-4f5c-9811-b893580d90cd-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.586631 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-scripts" (OuterVolumeSpecName: "scripts") pod "f0248953-855e-4f5c-9811-b893580d90cd" (UID: "f0248953-855e-4f5c-9811-b893580d90cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.587555 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0248953-855e-4f5c-9811-b893580d90cd-logs" (OuterVolumeSpecName: "logs") pod "f0248953-855e-4f5c-9811-b893580d90cd" (UID: "f0248953-855e-4f5c-9811-b893580d90cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.589693 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "f0248953-855e-4f5c-9811-b893580d90cd" (UID: "f0248953-855e-4f5c-9811-b893580d90cd"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.596687 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0248953-855e-4f5c-9811-b893580d90cd-kube-api-access-xt2v5" (OuterVolumeSpecName: "kube-api-access-xt2v5") pod "f0248953-855e-4f5c-9811-b893580d90cd" (UID: "f0248953-855e-4f5c-9811-b893580d90cd"). InnerVolumeSpecName "kube-api-access-xt2v5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.639627 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0248953-855e-4f5c-9811-b893580d90cd" (UID: "f0248953-855e-4f5c-9811-b893580d90cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.656356 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-config-data" (OuterVolumeSpecName: "config-data") pod "f0248953-855e-4f5c-9811-b893580d90cd" (UID: "f0248953-855e-4f5c-9811-b893580d90cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.673515 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f0248953-855e-4f5c-9811-b893580d90cd" (UID: "f0248953-855e-4f5c-9811-b893580d90cd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.682737 4930 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.695056 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0248953-855e-4f5c-9811-b893580d90cd-logs\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.695096 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.695107 4930 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.695121 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt2v5\" (UniqueName: \"kubernetes.io/projected/f0248953-855e-4f5c-9811-b893580d90cd-kube-api-access-xt2v5\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.695130 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.695167 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0248953-855e-4f5c-9811-b893580d90cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.744109 4930 operation_generator.go:917] UnmountDevice
succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Nov 24 12:18:03 crc kubenswrapper[4930]: I1124 12:18:03.797202 4930 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.005024 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.086301 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.101625 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dc67723-020c-471b-9834-f0dda7578d11" path="/var/lib/kubelet/pods/7dc67723-020c-471b-9834-f0dda7578d11/volumes"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.102992 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f0248953-855e-4f5c-9811-b893580d90cd","Type":"ContainerDied","Data":"0f4f9e12810db403e0379972e528968fbfa7a6e2720b6040227f6f7b80d613e3"}
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.103034 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae6345a4-b8af-4e5f-a155-02f7d3929aa6","Type":"ContainerStarted","Data":"030905668bdd66b3477dbddce06368d0df13dd5c8bc62d3d832c5fbe4b1b3c89"}
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.103086 4930 scope.go:117] "RemoveContainer" containerID="c1aad06afd7954be4c177cdb6633ffc0943fad57338d34a101c37fcbe3c54083"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.135766 4930 scope.go:117] "RemoveContainer" containerID="5a8fa8bc9d5f4ca0d5e698e0812e92307719280a80a069cb7ab6620d8da8441d"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.161595 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.178222 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.194672 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 12:18:04 crc kubenswrapper[4930]: E1124 12:18:04.195085 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0248953-855e-4f5c-9811-b893580d90cd" containerName="glance-log"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.195105 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0248953-855e-4f5c-9811-b893580d90cd" containerName="glance-log"
Nov 24 12:18:04 crc kubenswrapper[4930]: E1124 12:18:04.195117 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0248953-855e-4f5c-9811-b893580d90cd" containerName="glance-httpd"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.195124 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0248953-855e-4f5c-9811-b893580d90cd" containerName="glance-httpd"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.195332 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0248953-855e-4f5c-9811-b893580d90cd" containerName="glance-httpd"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.195358 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0248953-855e-4f5c-9811-b893580d90cd" containerName="glance-log"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.196364 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.206223 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.206328 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.206619 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.308945 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.308998 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.309041 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-config-data\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.309074 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsmnb\" (UniqueName: \"kubernetes.io/projected/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-kube-api-access-tsmnb\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.309335 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-scripts\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.309379 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.309415 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-logs\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.309443 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.411529 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-scripts\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.411598 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.411631 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-logs\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.411658 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.411712 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.411734 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.411773 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-config-data\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.411803 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsmnb\" (UniqueName: \"kubernetes.io/projected/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-kube-api-access-tsmnb\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.413231 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.413531 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-logs\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0"
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.413847 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-httpd-run\") pod \"glance-default-external-api-0\" (UID:
\"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.419158 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.419834 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-scripts\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.425132 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.426224 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-config-data\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.436202 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsmnb\" (UniqueName: \"kubernetes.io/projected/368b80c7-cc7d-4d6a-8b4d-90ea32596bf9-kube-api-access-tsmnb\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0" 
Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.440646 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9\") " pod="openstack/glance-default-external-api-0" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.522888 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.726645 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xj8hp"] Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.735322 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.742039 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.742096 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-p4mv2" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.742485 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.753063 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xj8hp"] Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.826732 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xj8hp\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " 
pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.827213 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tp8q\" (UniqueName: \"kubernetes.io/projected/58c592a3-0b0c-45e5-a53e-2a672e3ce388-kube-api-access-4tp8q\") pod \"nova-cell0-conductor-db-sync-xj8hp\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.827267 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-config-data\") pod \"nova-cell0-conductor-db-sync-xj8hp\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.827329 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-scripts\") pod \"nova-cell0-conductor-db-sync-xj8hp\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.935799 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tp8q\" (UniqueName: \"kubernetes.io/projected/58c592a3-0b0c-45e5-a53e-2a672e3ce388-kube-api-access-4tp8q\") pod \"nova-cell0-conductor-db-sync-xj8hp\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.935875 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-config-data\") pod 
\"nova-cell0-conductor-db-sync-xj8hp\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.935926 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-scripts\") pod \"nova-cell0-conductor-db-sync-xj8hp\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.936016 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xj8hp\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.950243 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-scripts\") pod \"nova-cell0-conductor-db-sync-xj8hp\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.950710 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-config-data\") pod \"nova-cell0-conductor-db-sync-xj8hp\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.951607 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xj8hp\" (UID: 
\"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:04 crc kubenswrapper[4930]: I1124 12:18:04.961720 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tp8q\" (UniqueName: \"kubernetes.io/projected/58c592a3-0b0c-45e5-a53e-2a672e3ce388-kube-api-access-4tp8q\") pod \"nova-cell0-conductor-db-sync-xj8hp\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.010468 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.099562 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.115704 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae6345a4-b8af-4e5f-a155-02f7d3929aa6","Type":"ContainerStarted","Data":"284981a1397c1cebae549ae8d751c0e651ec91fd6a7d36e6cdbae558569133da"} Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.119088 4930 generic.go:334] "Generic (PLEG): container finished" podID="c74d03fb-686f-44a0-9132-02dd2c5d3d46" containerID="f3b4a4d6bd8138c5b5da28bb0b3c0eea4f5f015a304ef49243235095ef3a0322" exitCode=0 Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.119117 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c74d03fb-686f-44a0-9132-02dd2c5d3d46","Type":"ContainerDied","Data":"f3b4a4d6bd8138c5b5da28bb0b3c0eea4f5f015a304ef49243235095ef3a0322"} Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.119140 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"c74d03fb-686f-44a0-9132-02dd2c5d3d46","Type":"ContainerDied","Data":"c94a3cd6beb3a1539344aaedc849ece766e05111cdbb762e6ed228bce00d37c3"} Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.119162 4930 scope.go:117] "RemoveContainer" containerID="f3b4a4d6bd8138c5b5da28bb0b3c0eea4f5f015a304ef49243235095ef3a0322" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.119325 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.152049 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c74d03fb-686f-44a0-9132-02dd2c5d3d46-httpd-run\") pod \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.152111 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-config-data\") pod \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.152131 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-internal-tls-certs\") pod \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.152150 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-scripts\") pod \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.152182 4930 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.152211 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l47sj\" (UniqueName: \"kubernetes.io/projected/c74d03fb-686f-44a0-9132-02dd2c5d3d46-kube-api-access-l47sj\") pod \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.152309 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-combined-ca-bundle\") pod \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.152373 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c74d03fb-686f-44a0-9132-02dd2c5d3d46-logs\") pod \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\" (UID: \"c74d03fb-686f-44a0-9132-02dd2c5d3d46\") " Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.152728 4930 scope.go:117] "RemoveContainer" containerID="7f8a3d7ed994aa71f942f412422c941ae6e52fb01d4d6018b5b34c5dce804729" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.152809 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c74d03fb-686f-44a0-9132-02dd2c5d3d46-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c74d03fb-686f-44a0-9132-02dd2c5d3d46" (UID: "c74d03fb-686f-44a0-9132-02dd2c5d3d46"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.153214 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c74d03fb-686f-44a0-9132-02dd2c5d3d46-logs" (OuterVolumeSpecName: "logs") pod "c74d03fb-686f-44a0-9132-02dd2c5d3d46" (UID: "c74d03fb-686f-44a0-9132-02dd2c5d3d46"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.156973 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "c74d03fb-686f-44a0-9132-02dd2c5d3d46" (UID: "c74d03fb-686f-44a0-9132-02dd2c5d3d46"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.159490 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-scripts" (OuterVolumeSpecName: "scripts") pod "c74d03fb-686f-44a0-9132-02dd2c5d3d46" (UID: "c74d03fb-686f-44a0-9132-02dd2c5d3d46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.160380 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c74d03fb-686f-44a0-9132-02dd2c5d3d46-kube-api-access-l47sj" (OuterVolumeSpecName: "kube-api-access-l47sj") pod "c74d03fb-686f-44a0-9132-02dd2c5d3d46" (UID: "c74d03fb-686f-44a0-9132-02dd2c5d3d46"). InnerVolumeSpecName "kube-api-access-l47sj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.183854 4930 scope.go:117] "RemoveContainer" containerID="f3b4a4d6bd8138c5b5da28bb0b3c0eea4f5f015a304ef49243235095ef3a0322" Nov 24 12:18:05 crc kubenswrapper[4930]: E1124 12:18:05.186178 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3b4a4d6bd8138c5b5da28bb0b3c0eea4f5f015a304ef49243235095ef3a0322\": container with ID starting with f3b4a4d6bd8138c5b5da28bb0b3c0eea4f5f015a304ef49243235095ef3a0322 not found: ID does not exist" containerID="f3b4a4d6bd8138c5b5da28bb0b3c0eea4f5f015a304ef49243235095ef3a0322" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.186387 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b4a4d6bd8138c5b5da28bb0b3c0eea4f5f015a304ef49243235095ef3a0322"} err="failed to get container status \"f3b4a4d6bd8138c5b5da28bb0b3c0eea4f5f015a304ef49243235095ef3a0322\": rpc error: code = NotFound desc = could not find container \"f3b4a4d6bd8138c5b5da28bb0b3c0eea4f5f015a304ef49243235095ef3a0322\": container with ID starting with f3b4a4d6bd8138c5b5da28bb0b3c0eea4f5f015a304ef49243235095ef3a0322 not found: ID does not exist" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.186899 4930 scope.go:117] "RemoveContainer" containerID="7f8a3d7ed994aa71f942f412422c941ae6e52fb01d4d6018b5b34c5dce804729" Nov 24 12:18:05 crc kubenswrapper[4930]: E1124 12:18:05.188083 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f8a3d7ed994aa71f942f412422c941ae6e52fb01d4d6018b5b34c5dce804729\": container with ID starting with 7f8a3d7ed994aa71f942f412422c941ae6e52fb01d4d6018b5b34c5dce804729 not found: ID does not exist" containerID="7f8a3d7ed994aa71f942f412422c941ae6e52fb01d4d6018b5b34c5dce804729" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.188128 
4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8a3d7ed994aa71f942f412422c941ae6e52fb01d4d6018b5b34c5dce804729"} err="failed to get container status \"7f8a3d7ed994aa71f942f412422c941ae6e52fb01d4d6018b5b34c5dce804729\": rpc error: code = NotFound desc = could not find container \"7f8a3d7ed994aa71f942f412422c941ae6e52fb01d4d6018b5b34c5dce804729\": container with ID starting with 7f8a3d7ed994aa71f942f412422c941ae6e52fb01d4d6018b5b34c5dce804729 not found: ID does not exist" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.206107 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c74d03fb-686f-44a0-9132-02dd2c5d3d46" (UID: "c74d03fb-686f-44a0-9132-02dd2c5d3d46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.233191 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-config-data" (OuterVolumeSpecName: "config-data") pod "c74d03fb-686f-44a0-9132-02dd2c5d3d46" (UID: "c74d03fb-686f-44a0-9132-02dd2c5d3d46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.242826 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c74d03fb-686f-44a0-9132-02dd2c5d3d46" (UID: "c74d03fb-686f-44a0-9132-02dd2c5d3d46"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.254808 4930 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c74d03fb-686f-44a0-9132-02dd2c5d3d46-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.254846 4930 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.254858 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.254870 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.254892 4930 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.254904 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l47sj\" (UniqueName: \"kubernetes.io/projected/c74d03fb-686f-44a0-9132-02dd2c5d3d46-kube-api-access-l47sj\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.254915 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c74d03fb-686f-44a0-9132-02dd2c5d3d46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.254925 4930 reconciler_common.go:293] 
"Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c74d03fb-686f-44a0-9132-02dd2c5d3d46-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.285676 4930 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.336524 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.358418 4930 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.610649 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.626490 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.652008 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:18:05 crc kubenswrapper[4930]: E1124 12:18:05.652489 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74d03fb-686f-44a0-9132-02dd2c5d3d46" containerName="glance-log" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.652511 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74d03fb-686f-44a0-9132-02dd2c5d3d46" containerName="glance-log" Nov 24 12:18:05 crc kubenswrapper[4930]: E1124 12:18:05.652585 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74d03fb-686f-44a0-9132-02dd2c5d3d46" containerName="glance-httpd" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.652592 4930 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c74d03fb-686f-44a0-9132-02dd2c5d3d46" containerName="glance-httpd" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.652775 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="c74d03fb-686f-44a0-9132-02dd2c5d3d46" containerName="glance-httpd" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.652789 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="c74d03fb-686f-44a0-9132-02dd2c5d3d46" containerName="glance-log" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.653751 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.664272 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.664451 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.692316 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.713195 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.713322 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: 
I1124 12:18:05.713355 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnwxd\" (UniqueName: \"kubernetes.io/projected/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-kube-api-access-wnwxd\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.713504 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.713532 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.713733 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.713762 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 
12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.713784 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.719036 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xj8hp"] Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.816056 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.816239 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.816284 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.816309 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.816368 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.816429 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.816459 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnwxd\" (UniqueName: \"kubernetes.io/projected/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-kube-api-access-wnwxd\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.816565 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.817109 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " 
pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.817428 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.817947 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.824418 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.825793 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.834683 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: 
I1124 12:18:05.837816 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnwxd\" (UniqueName: \"kubernetes.io/projected/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-kube-api-access-wnwxd\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.838824 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:05 crc kubenswrapper[4930]: I1124 12:18:05.861209 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:18:06 crc kubenswrapper[4930]: I1124 12:18:06.014600 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:18:06 crc kubenswrapper[4930]: I1124 12:18:06.038698 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:18:06 crc kubenswrapper[4930]: I1124 12:18:06.119095 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c74d03fb-686f-44a0-9132-02dd2c5d3d46" path="/var/lib/kubelet/pods/c74d03fb-686f-44a0-9132-02dd2c5d3d46/volumes" Nov 24 12:18:06 crc kubenswrapper[4930]: I1124 12:18:06.122000 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0248953-855e-4f5c-9811-b893580d90cd" path="/var/lib/kubelet/pods/f0248953-855e-4f5c-9811-b893580d90cd/volumes" Nov 24 12:18:06 crc kubenswrapper[4930]: I1124 12:18:06.137388 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae6345a4-b8af-4e5f-a155-02f7d3929aa6","Type":"ContainerStarted","Data":"c4846387321b1ac0484c6cee4d32d2ffee18cd0e70220c0fb432dfaa72d23c77"} Nov 24 12:18:06 crc kubenswrapper[4930]: I1124 12:18:06.167025 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xj8hp" event={"ID":"58c592a3-0b0c-45e5-a53e-2a672e3ce388","Type":"ContainerStarted","Data":"1c329ac33f1b65bb21761232374171b69e725c26e6ef01dff0ce30d451fa26c9"} Nov 24 12:18:06 crc kubenswrapper[4930]: I1124 12:18:06.179123 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9","Type":"ContainerStarted","Data":"3933cad7f3c85700d338e8950f0517b6a4fac5d48dd86736a544ad018ea6eb32"} Nov 24 12:18:06 crc kubenswrapper[4930]: I1124 12:18:06.631005 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.181514 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.198793 4930 generic.go:334] "Generic (PLEG): container finished" podID="dc1269fb-938b-4634-a683-9b0375e01915" containerID="8411c6e068774c00acdd68d94c0dff68f5ae5b30a88439215cbadea9e061ca90" exitCode=137 Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.198864 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69b96dd4dd-2xcvn" event={"ID":"dc1269fb-938b-4634-a683-9b0375e01915","Type":"ContainerDied","Data":"8411c6e068774c00acdd68d94c0dff68f5ae5b30a88439215cbadea9e061ca90"} Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.198900 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69b96dd4dd-2xcvn" event={"ID":"dc1269fb-938b-4634-a683-9b0375e01915","Type":"ContainerDied","Data":"a8f420061cd09591125744b746e129818673889aba214fb24b5bd517be9125c0"} Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.198918 4930 scope.go:117] "RemoveContainer" containerID="1539e69199a01f99e44f84c909d2ffeda2fc04ba9152bdee9513672ff3c78a9f" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.199051 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69b96dd4dd-2xcvn" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.219753 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae6345a4-b8af-4e5f-a155-02f7d3929aa6","Type":"ContainerStarted","Data":"602a8a65ceb2ba9cee27649f192de10279d35fc1a1b01c5f47df78e7ff4eafdb"} Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.225110 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9","Type":"ContainerStarted","Data":"b91f6098a1b6a6331776a6876b7a98c6cd1aa0f283e87f22454d61a7f7870d30"} Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.225172 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"368b80c7-cc7d-4d6a-8b4d-90ea32596bf9","Type":"ContainerStarted","Data":"520e5cb03a428b393d21c870441451c5f1ed8e53a1c26f042d907c54915570c0"} Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.229165 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3","Type":"ContainerStarted","Data":"1c21a964dbd7b79370385ce7c6f0d45c7d4ef6cc850520811953416b35fc284b"} Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.267925 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.267891096 podStartE2EDuration="3.267891096s" podCreationTimestamp="2025-11-24 12:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:18:07.249008723 +0000 UTC m=+1133.863336673" watchObservedRunningTime="2025-11-24 12:18:07.267891096 +0000 UTC m=+1133.882219046" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.349154 4930 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc1269fb-938b-4634-a683-9b0375e01915-config-data\") pod \"dc1269fb-938b-4634-a683-9b0375e01915\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.349276 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khmhf\" (UniqueName: \"kubernetes.io/projected/dc1269fb-938b-4634-a683-9b0375e01915-kube-api-access-khmhf\") pod \"dc1269fb-938b-4634-a683-9b0375e01915\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.349320 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc1269fb-938b-4634-a683-9b0375e01915-scripts\") pod \"dc1269fb-938b-4634-a683-9b0375e01915\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.349358 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-horizon-tls-certs\") pod \"dc1269fb-938b-4634-a683-9b0375e01915\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.349401 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-horizon-secret-key\") pod \"dc1269fb-938b-4634-a683-9b0375e01915\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.349423 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc1269fb-938b-4634-a683-9b0375e01915-logs\") pod \"dc1269fb-938b-4634-a683-9b0375e01915\" (UID: 
\"dc1269fb-938b-4634-a683-9b0375e01915\") " Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.349499 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-combined-ca-bundle\") pod \"dc1269fb-938b-4634-a683-9b0375e01915\" (UID: \"dc1269fb-938b-4634-a683-9b0375e01915\") " Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.350882 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc1269fb-938b-4634-a683-9b0375e01915-logs" (OuterVolumeSpecName: "logs") pod "dc1269fb-938b-4634-a683-9b0375e01915" (UID: "dc1269fb-938b-4634-a683-9b0375e01915"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.362380 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dc1269fb-938b-4634-a683-9b0375e01915" (UID: "dc1269fb-938b-4634-a683-9b0375e01915"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.364401 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc1269fb-938b-4634-a683-9b0375e01915-kube-api-access-khmhf" (OuterVolumeSpecName: "kube-api-access-khmhf") pod "dc1269fb-938b-4634-a683-9b0375e01915" (UID: "dc1269fb-938b-4634-a683-9b0375e01915"). InnerVolumeSpecName "kube-api-access-khmhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.383120 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc1269fb-938b-4634-a683-9b0375e01915-scripts" (OuterVolumeSpecName: "scripts") pod "dc1269fb-938b-4634-a683-9b0375e01915" (UID: "dc1269fb-938b-4634-a683-9b0375e01915"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.398690 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc1269fb-938b-4634-a683-9b0375e01915" (UID: "dc1269fb-938b-4634-a683-9b0375e01915"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.407663 4930 scope.go:117] "RemoveContainer" containerID="8411c6e068774c00acdd68d94c0dff68f5ae5b30a88439215cbadea9e061ca90" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.409686 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc1269fb-938b-4634-a683-9b0375e01915-config-data" (OuterVolumeSpecName: "config-data") pod "dc1269fb-938b-4634-a683-9b0375e01915" (UID: "dc1269fb-938b-4634-a683-9b0375e01915"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.452904 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.452939 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc1269fb-938b-4634-a683-9b0375e01915-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.452954 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khmhf\" (UniqueName: \"kubernetes.io/projected/dc1269fb-938b-4634-a683-9b0375e01915-kube-api-access-khmhf\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.452966 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc1269fb-938b-4634-a683-9b0375e01915-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.452977 4930 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.452988 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc1269fb-938b-4634-a683-9b0375e01915-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.456426 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "dc1269fb-938b-4634-a683-9b0375e01915" (UID: "dc1269fb-938b-4634-a683-9b0375e01915"). 
InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.554998 4930 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1269fb-938b-4634-a683-9b0375e01915-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.609618 4930 scope.go:117] "RemoveContainer" containerID="1539e69199a01f99e44f84c909d2ffeda2fc04ba9152bdee9513672ff3c78a9f" Nov 24 12:18:07 crc kubenswrapper[4930]: E1124 12:18:07.610609 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1539e69199a01f99e44f84c909d2ffeda2fc04ba9152bdee9513672ff3c78a9f\": container with ID starting with 1539e69199a01f99e44f84c909d2ffeda2fc04ba9152bdee9513672ff3c78a9f not found: ID does not exist" containerID="1539e69199a01f99e44f84c909d2ffeda2fc04ba9152bdee9513672ff3c78a9f" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.610656 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1539e69199a01f99e44f84c909d2ffeda2fc04ba9152bdee9513672ff3c78a9f"} err="failed to get container status \"1539e69199a01f99e44f84c909d2ffeda2fc04ba9152bdee9513672ff3c78a9f\": rpc error: code = NotFound desc = could not find container \"1539e69199a01f99e44f84c909d2ffeda2fc04ba9152bdee9513672ff3c78a9f\": container with ID starting with 1539e69199a01f99e44f84c909d2ffeda2fc04ba9152bdee9513672ff3c78a9f not found: ID does not exist" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.610684 4930 scope.go:117] "RemoveContainer" containerID="8411c6e068774c00acdd68d94c0dff68f5ae5b30a88439215cbadea9e061ca90" Nov 24 12:18:07 crc kubenswrapper[4930]: E1124 12:18:07.611191 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8411c6e068774c00acdd68d94c0dff68f5ae5b30a88439215cbadea9e061ca90\": container with ID starting with 8411c6e068774c00acdd68d94c0dff68f5ae5b30a88439215cbadea9e061ca90 not found: ID does not exist" containerID="8411c6e068774c00acdd68d94c0dff68f5ae5b30a88439215cbadea9e061ca90" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.611246 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8411c6e068774c00acdd68d94c0dff68f5ae5b30a88439215cbadea9e061ca90"} err="failed to get container status \"8411c6e068774c00acdd68d94c0dff68f5ae5b30a88439215cbadea9e061ca90\": rpc error: code = NotFound desc = could not find container \"8411c6e068774c00acdd68d94c0dff68f5ae5b30a88439215cbadea9e061ca90\": container with ID starting with 8411c6e068774c00acdd68d94c0dff68f5ae5b30a88439215cbadea9e061ca90 not found: ID does not exist" Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.626698 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69b96dd4dd-2xcvn"] Nov 24 12:18:07 crc kubenswrapper[4930]: I1124 12:18:07.634320 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69b96dd4dd-2xcvn"] Nov 24 12:18:08 crc kubenswrapper[4930]: I1124 12:18:08.111767 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc1269fb-938b-4634-a683-9b0375e01915" path="/var/lib/kubelet/pods/dc1269fb-938b-4634-a683-9b0375e01915/volumes" Nov 24 12:18:08 crc kubenswrapper[4930]: I1124 12:18:08.250982 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3","Type":"ContainerStarted","Data":"9efa2af48221c424c3cf6c53f4929ba94c548b35e31be2d41c8ffe3716a2cc10"} Nov 24 12:18:08 crc kubenswrapper[4930]: I1124 12:18:08.251040 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3","Type":"ContainerStarted","Data":"983001e71f5ff7374e7ed5aefb4c82e0c464dba1141f8ebf0e5c3012a055d4c0"} Nov 24 12:18:08 crc kubenswrapper[4930]: I1124 12:18:08.261442 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="ceilometer-central-agent" containerID="cri-o://284981a1397c1cebae549ae8d751c0e651ec91fd6a7d36e6cdbae558569133da" gracePeriod=30 Nov 24 12:18:08 crc kubenswrapper[4930]: I1124 12:18:08.261706 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae6345a4-b8af-4e5f-a155-02f7d3929aa6","Type":"ContainerStarted","Data":"ddc2c9ee52464d3a505cf5a63d6c7d8f7c67c94b4d5964daad386f3e59aab092"} Nov 24 12:18:08 crc kubenswrapper[4930]: I1124 12:18:08.261775 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:18:08 crc kubenswrapper[4930]: I1124 12:18:08.261825 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="proxy-httpd" containerID="cri-o://ddc2c9ee52464d3a505cf5a63d6c7d8f7c67c94b4d5964daad386f3e59aab092" gracePeriod=30 Nov 24 12:18:08 crc kubenswrapper[4930]: I1124 12:18:08.261878 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="sg-core" containerID="cri-o://602a8a65ceb2ba9cee27649f192de10279d35fc1a1b01c5f47df78e7ff4eafdb" gracePeriod=30 Nov 24 12:18:08 crc kubenswrapper[4930]: I1124 12:18:08.261959 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="ceilometer-notification-agent" containerID="cri-o://c4846387321b1ac0484c6cee4d32d2ffee18cd0e70220c0fb432dfaa72d23c77" 
gracePeriod=30 Nov 24 12:18:08 crc kubenswrapper[4930]: I1124 12:18:08.282024 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.281997752 podStartE2EDuration="3.281997752s" podCreationTimestamp="2025-11-24 12:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:18:08.277669618 +0000 UTC m=+1134.891997568" watchObservedRunningTime="2025-11-24 12:18:08.281997752 +0000 UTC m=+1134.896325702" Nov 24 12:18:08 crc kubenswrapper[4930]: I1124 12:18:08.318335 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.347478323 podStartE2EDuration="5.318308206s" podCreationTimestamp="2025-11-24 12:18:03 +0000 UTC" firstStartedPulling="2025-11-24 12:18:04.010490982 +0000 UTC m=+1130.624818932" lastFinishedPulling="2025-11-24 12:18:07.981320865 +0000 UTC m=+1134.595648815" observedRunningTime="2025-11-24 12:18:08.309457792 +0000 UTC m=+1134.923785742" watchObservedRunningTime="2025-11-24 12:18:08.318308206 +0000 UTC m=+1134.932636156" Nov 24 12:18:09 crc kubenswrapper[4930]: I1124 12:18:09.276285 4930 generic.go:334] "Generic (PLEG): container finished" podID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerID="602a8a65ceb2ba9cee27649f192de10279d35fc1a1b01c5f47df78e7ff4eafdb" exitCode=2 Nov 24 12:18:09 crc kubenswrapper[4930]: I1124 12:18:09.276732 4930 generic.go:334] "Generic (PLEG): container finished" podID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerID="c4846387321b1ac0484c6cee4d32d2ffee18cd0e70220c0fb432dfaa72d23c77" exitCode=0 Nov 24 12:18:09 crc kubenswrapper[4930]: I1124 12:18:09.276352 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ae6345a4-b8af-4e5f-a155-02f7d3929aa6","Type":"ContainerDied","Data":"602a8a65ceb2ba9cee27649f192de10279d35fc1a1b01c5f47df78e7ff4eafdb"} Nov 24 12:18:09 crc kubenswrapper[4930]: I1124 12:18:09.276871 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae6345a4-b8af-4e5f-a155-02f7d3929aa6","Type":"ContainerDied","Data":"c4846387321b1ac0484c6cee4d32d2ffee18cd0e70220c0fb432dfaa72d23c77"} Nov 24 12:18:13 crc kubenswrapper[4930]: I1124 12:18:13.329817 4930 generic.go:334] "Generic (PLEG): container finished" podID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerID="284981a1397c1cebae549ae8d751c0e651ec91fd6a7d36e6cdbae558569133da" exitCode=0 Nov 24 12:18:13 crc kubenswrapper[4930]: I1124 12:18:13.329913 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae6345a4-b8af-4e5f-a155-02f7d3929aa6","Type":"ContainerDied","Data":"284981a1397c1cebae549ae8d751c0e651ec91fd6a7d36e6cdbae558569133da"} Nov 24 12:18:14 crc kubenswrapper[4930]: I1124 12:18:14.340831 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xj8hp" event={"ID":"58c592a3-0b0c-45e5-a53e-2a672e3ce388","Type":"ContainerStarted","Data":"c1cbb6ecc6454effac40cf4b3df72296e2d98a939dc097da1c2eea2579427aaf"} Nov 24 12:18:14 crc kubenswrapper[4930]: I1124 12:18:14.360373 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xj8hp" podStartSLOduration=2.004308176 podStartE2EDuration="10.360335957s" podCreationTimestamp="2025-11-24 12:18:04 +0000 UTC" firstStartedPulling="2025-11-24 12:18:05.726099394 +0000 UTC m=+1132.340427344" lastFinishedPulling="2025-11-24 12:18:14.082127175 +0000 UTC m=+1140.696455125" observedRunningTime="2025-11-24 12:18:14.355907489 +0000 UTC m=+1140.970235469" watchObservedRunningTime="2025-11-24 12:18:14.360335957 +0000 UTC m=+1140.974663937" Nov 24 12:18:14 crc kubenswrapper[4930]: I1124 
12:18:14.524078 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 12:18:14 crc kubenswrapper[4930]: I1124 12:18:14.524152 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 12:18:14 crc kubenswrapper[4930]: I1124 12:18:14.563574 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 12:18:14 crc kubenswrapper[4930]: I1124 12:18:14.565062 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 12:18:15 crc kubenswrapper[4930]: I1124 12:18:15.350830 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 12:18:15 crc kubenswrapper[4930]: I1124 12:18:15.351169 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 12:18:16 crc kubenswrapper[4930]: I1124 12:18:16.039151 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 12:18:16 crc kubenswrapper[4930]: I1124 12:18:16.039226 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 12:18:16 crc kubenswrapper[4930]: I1124 12:18:16.077625 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 12:18:16 crc kubenswrapper[4930]: I1124 12:18:16.104135 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 12:18:16 crc kubenswrapper[4930]: I1124 12:18:16.363959 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 12:18:16 crc kubenswrapper[4930]: 
I1124 12:18:16.364834 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 12:18:17 crc kubenswrapper[4930]: I1124 12:18:17.369162 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 12:18:17 crc kubenswrapper[4930]: I1124 12:18:17.378109 4930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:18:17 crc kubenswrapper[4930]: I1124 12:18:17.383258 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 12:18:18 crc kubenswrapper[4930]: I1124 12:18:18.307259 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 12:18:18 crc kubenswrapper[4930]: I1124 12:18:18.388769 4930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:18:18 crc kubenswrapper[4930]: I1124 12:18:18.509087 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 12:18:25 crc kubenswrapper[4930]: I1124 12:18:25.470197 4930 generic.go:334] "Generic (PLEG): container finished" podID="58c592a3-0b0c-45e5-a53e-2a672e3ce388" containerID="c1cbb6ecc6454effac40cf4b3df72296e2d98a939dc097da1c2eea2579427aaf" exitCode=0 Nov 24 12:18:25 crc kubenswrapper[4930]: I1124 12:18:25.470314 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xj8hp" event={"ID":"58c592a3-0b0c-45e5-a53e-2a672e3ce388","Type":"ContainerDied","Data":"c1cbb6ecc6454effac40cf4b3df72296e2d98a939dc097da1c2eea2579427aaf"} Nov 24 12:18:26 crc kubenswrapper[4930]: I1124 12:18:26.847569 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:26 crc kubenswrapper[4930]: I1124 12:18:26.963648 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tp8q\" (UniqueName: \"kubernetes.io/projected/58c592a3-0b0c-45e5-a53e-2a672e3ce388-kube-api-access-4tp8q\") pod \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " Nov 24 12:18:26 crc kubenswrapper[4930]: I1124 12:18:26.963856 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-combined-ca-bundle\") pod \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " Nov 24 12:18:26 crc kubenswrapper[4930]: I1124 12:18:26.963894 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-config-data\") pod \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " Nov 24 12:18:26 crc kubenswrapper[4930]: I1124 12:18:26.963914 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-scripts\") pod \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\" (UID: \"58c592a3-0b0c-45e5-a53e-2a672e3ce388\") " Nov 24 12:18:26 crc kubenswrapper[4930]: I1124 12:18:26.970968 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c592a3-0b0c-45e5-a53e-2a672e3ce388-kube-api-access-4tp8q" (OuterVolumeSpecName: "kube-api-access-4tp8q") pod "58c592a3-0b0c-45e5-a53e-2a672e3ce388" (UID: "58c592a3-0b0c-45e5-a53e-2a672e3ce388"). InnerVolumeSpecName "kube-api-access-4tp8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:26 crc kubenswrapper[4930]: I1124 12:18:26.971554 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-scripts" (OuterVolumeSpecName: "scripts") pod "58c592a3-0b0c-45e5-a53e-2a672e3ce388" (UID: "58c592a3-0b0c-45e5-a53e-2a672e3ce388"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:26 crc kubenswrapper[4930]: I1124 12:18:26.995635 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-config-data" (OuterVolumeSpecName: "config-data") pod "58c592a3-0b0c-45e5-a53e-2a672e3ce388" (UID: "58c592a3-0b0c-45e5-a53e-2a672e3ce388"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.008953 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58c592a3-0b0c-45e5-a53e-2a672e3ce388" (UID: "58c592a3-0b0c-45e5-a53e-2a672e3ce388"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.066085 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.066372 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.066443 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58c592a3-0b0c-45e5-a53e-2a672e3ce388-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.066507 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tp8q\" (UniqueName: \"kubernetes.io/projected/58c592a3-0b0c-45e5-a53e-2a672e3ce388-kube-api-access-4tp8q\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.488654 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xj8hp" event={"ID":"58c592a3-0b0c-45e5-a53e-2a672e3ce388","Type":"ContainerDied","Data":"1c329ac33f1b65bb21761232374171b69e725c26e6ef01dff0ce30d451fa26c9"} Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.489034 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c329ac33f1b65bb21761232374171b69e725c26e6ef01dff0ce30d451fa26c9" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.488740 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xj8hp" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.627052 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 12:18:27 crc kubenswrapper[4930]: E1124 12:18:27.627496 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1269fb-938b-4634-a683-9b0375e01915" containerName="horizon-log" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.627515 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1269fb-938b-4634-a683-9b0375e01915" containerName="horizon-log" Nov 24 12:18:27 crc kubenswrapper[4930]: E1124 12:18:27.627571 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c592a3-0b0c-45e5-a53e-2a672e3ce388" containerName="nova-cell0-conductor-db-sync" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.627581 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c592a3-0b0c-45e5-a53e-2a672e3ce388" containerName="nova-cell0-conductor-db-sync" Nov 24 12:18:27 crc kubenswrapper[4930]: E1124 12:18:27.627605 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1269fb-938b-4634-a683-9b0375e01915" containerName="horizon" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.627613 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1269fb-938b-4634-a683-9b0375e01915" containerName="horizon" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.627807 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c592a3-0b0c-45e5-a53e-2a672e3ce388" containerName="nova-cell0-conductor-db-sync" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.627832 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc1269fb-938b-4634-a683-9b0375e01915" containerName="horizon-log" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.627857 4930 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dc1269fb-938b-4634-a683-9b0375e01915" containerName="horizon" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.628614 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.635765 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.635928 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-p4mv2" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.656979 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.676815 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.676900 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.676933 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzxr2\" (UniqueName: \"kubernetes.io/projected/3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09-kube-api-access-tzxr2\") pod \"nova-cell0-conductor-0\" (UID: \"3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 
12:18:27.778491 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.778573 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzxr2\" (UniqueName: \"kubernetes.io/projected/3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09-kube-api-access-tzxr2\") pod \"nova-cell0-conductor-0\" (UID: \"3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.778714 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.783826 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.791564 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.798119 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzxr2\" 
(UniqueName: \"kubernetes.io/projected/3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09-kube-api-access-tzxr2\") pod \"nova-cell0-conductor-0\" (UID: \"3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:18:27 crc kubenswrapper[4930]: I1124 12:18:27.955882 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 12:18:28 crc kubenswrapper[4930]: I1124 12:18:28.398765 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 12:18:28 crc kubenswrapper[4930]: I1124 12:18:28.499168 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09","Type":"ContainerStarted","Data":"3d6c70930756e875079745e31a429a843a43fa0895cf9e1f4adbc329a0cf835d"} Nov 24 12:18:29 crc kubenswrapper[4930]: I1124 12:18:29.510045 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09","Type":"ContainerStarted","Data":"071c8c5e8ebc0e8045ac99f215789f4f7bf89e745e242827a0d73b56aa2faac7"} Nov 24 12:18:29 crc kubenswrapper[4930]: I1124 12:18:29.510379 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 24 12:18:29 crc kubenswrapper[4930]: I1124 12:18:29.539639 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.539616528 podStartE2EDuration="2.539616528s" podCreationTimestamp="2025-11-24 12:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:18:29.535221202 +0000 UTC m=+1156.149549152" watchObservedRunningTime="2025-11-24 12:18:29.539616528 +0000 UTC m=+1156.153944478" Nov 24 12:18:33 crc kubenswrapper[4930]: I1124 12:18:33.496052 4930 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 12:18:37 crc kubenswrapper[4930]: I1124 12:18:37.981147 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.622801 4930 generic.go:334] "Generic (PLEG): container finished" podID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerID="ddc2c9ee52464d3a505cf5a63d6c7d8f7c67c94b4d5964daad386f3e59aab092" exitCode=137 Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.623265 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae6345a4-b8af-4e5f-a155-02f7d3929aa6","Type":"ContainerDied","Data":"ddc2c9ee52464d3a505cf5a63d6c7d8f7c67c94b4d5964daad386f3e59aab092"} Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.651069 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-ttwlm"] Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.652429 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.657618 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.658422 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.678296 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ttwlm"] Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.730917 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.781979 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzf6g\" (UniqueName: \"kubernetes.io/projected/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-kube-api-access-pzf6g\") pod \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.782070 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-combined-ca-bundle\") pod \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.782156 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-run-httpd\") pod \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.782234 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-sg-core-conf-yaml\") pod \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.782311 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-config-data\") pod \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.782366 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-scripts\") pod \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.782402 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-log-httpd\") pod \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\" (UID: \"ae6345a4-b8af-4e5f-a155-02f7d3929aa6\") " Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.783117 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-scripts\") pod \"nova-cell0-cell-mapping-ttwlm\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.783179 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-config-data\") pod \"nova-cell0-cell-mapping-ttwlm\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.783222 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4pmm\" (UniqueName: \"kubernetes.io/projected/51338fbc-fcb2-458b-9b02-8f7fec515821-kube-api-access-f4pmm\") pod \"nova-cell0-cell-mapping-ttwlm\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.783279 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ttwlm\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.836834 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ae6345a4-b8af-4e5f-a155-02f7d3929aa6" (UID: "ae6345a4-b8af-4e5f-a155-02f7d3929aa6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.838570 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-scripts" (OuterVolumeSpecName: "scripts") pod "ae6345a4-b8af-4e5f-a155-02f7d3929aa6" (UID: "ae6345a4-b8af-4e5f-a155-02f7d3929aa6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.844370 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ae6345a4-b8af-4e5f-a155-02f7d3929aa6" (UID: "ae6345a4-b8af-4e5f-a155-02f7d3929aa6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.852637 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-kube-api-access-pzf6g" (OuterVolumeSpecName: "kube-api-access-pzf6g") pod "ae6345a4-b8af-4e5f-a155-02f7d3929aa6" (UID: "ae6345a4-b8af-4e5f-a155-02f7d3929aa6"). InnerVolumeSpecName "kube-api-access-pzf6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.955784 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-scripts\") pod \"nova-cell0-cell-mapping-ttwlm\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.955866 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-config-data\") pod \"nova-cell0-cell-mapping-ttwlm\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.955915 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4pmm\" (UniqueName: \"kubernetes.io/projected/51338fbc-fcb2-458b-9b02-8f7fec515821-kube-api-access-f4pmm\") pod \"nova-cell0-cell-mapping-ttwlm\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.955970 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ttwlm\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.956060 4930 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.956075 4930 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pzf6g\" (UniqueName: \"kubernetes.io/projected/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-kube-api-access-pzf6g\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.956090 4930 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.956101 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.965642 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ttwlm\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.966523 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ae6345a4-b8af-4e5f-a155-02f7d3929aa6" (UID: "ae6345a4-b8af-4e5f-a155-02f7d3929aa6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.973140 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:18:38 crc kubenswrapper[4930]: E1124 12:18:38.973554 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="sg-core" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.973568 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="sg-core" Nov 24 12:18:38 crc kubenswrapper[4930]: E1124 12:18:38.973583 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="ceilometer-notification-agent" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.973589 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="ceilometer-notification-agent" Nov 24 12:18:38 crc kubenswrapper[4930]: E1124 12:18:38.973600 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="proxy-httpd" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.973605 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="proxy-httpd" Nov 24 12:18:38 crc kubenswrapper[4930]: E1124 12:18:38.973618 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="ceilometer-central-agent" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.973624 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="ceilometer-central-agent" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.973787 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" 
containerName="sg-core" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.973799 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="ceilometer-notification-agent" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.973807 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="ceilometer-central-agent" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.973823 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" containerName="proxy-httpd" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.974435 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.976733 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-scripts\") pod \"nova-cell0-cell-mapping-ttwlm\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.987154 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:18:38 crc kubenswrapper[4930]: I1124 12:18:38.992188 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.016199 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4pmm\" (UniqueName: \"kubernetes.io/projected/51338fbc-fcb2-458b-9b02-8f7fec515821-kube-api-access-f4pmm\") pod \"nova-cell0-cell-mapping-ttwlm\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.016311 4930 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-config-data\") pod \"nova-cell0-cell-mapping-ttwlm\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.059499 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.060871 4930 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.078554 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae6345a4-b8af-4e5f-a155-02f7d3929aa6" (UID: "ae6345a4-b8af-4e5f-a155-02f7d3929aa6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.162893 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01aecd0b-d42b-494d-a6a6-d5294c283a8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.163200 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjkft\" (UniqueName: \"kubernetes.io/projected/01aecd0b-d42b-494d-a6a6-d5294c283a8a-kube-api-access-hjkft\") pod \"nova-scheduler-0\" (UID: \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.163267 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01aecd0b-d42b-494d-a6a6-d5294c283a8a-config-data\") pod \"nova-scheduler-0\" (UID: \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.163343 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.171607 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.173379 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.178042 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.184776 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-config-data" (OuterVolumeSpecName: "config-data") pod "ae6345a4-b8af-4e5f-a155-02f7d3929aa6" (UID: "ae6345a4-b8af-4e5f-a155-02f7d3929aa6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.208217 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.209827 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.214628 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.227990 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.244704 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.266344 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01aecd0b-d42b-494d-a6a6-d5294c283a8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.266400 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjkft\" (UniqueName: 
\"kubernetes.io/projected/01aecd0b-d42b-494d-a6a6-d5294c283a8a-kube-api-access-hjkft\") pod \"nova-scheduler-0\" (UID: \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.266492 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01aecd0b-d42b-494d-a6a6-d5294c283a8a-config-data\") pod \"nova-scheduler-0\" (UID: \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.266596 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6345a4-b8af-4e5f-a155-02f7d3929aa6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.281310 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01aecd0b-d42b-494d-a6a6-d5294c283a8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.288455 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-6gntb"] Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.290253 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01aecd0b-d42b-494d-a6a6-d5294c283a8a-config-data\") pod \"nova-scheduler-0\" (UID: \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.302365 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.310245 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjkft\" (UniqueName: \"kubernetes.io/projected/01aecd0b-d42b-494d-a6a6-d5294c283a8a-kube-api-access-hjkft\") pod \"nova-scheduler-0\" (UID: \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.321015 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-6gntb"] Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.329985 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.331481 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.338248 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.347926 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.349138 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.367913 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-logs\") pod \"nova-api-0\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " pod="openstack/nova-api-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.367971 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e696ce68-2834-4538-8662-7c7fb20cc1df-config-data\") pod \"nova-metadata-0\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.368043 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-config-data\") pod \"nova-api-0\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " pod="openstack/nova-api-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.368100 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " pod="openstack/nova-api-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.368170 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpzpm\" (UniqueName: \"kubernetes.io/projected/e696ce68-2834-4538-8662-7c7fb20cc1df-kube-api-access-xpzpm\") pod \"nova-metadata-0\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.368206 4930 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e696ce68-2834-4538-8662-7c7fb20cc1df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.368247 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e696ce68-2834-4538-8662-7c7fb20cc1df-logs\") pod \"nova-metadata-0\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.368294 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r97cc\" (UniqueName: \"kubernetes.io/projected/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-kube-api-access-r97cc\") pod \"nova-api-0\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " pod="openstack/nova-api-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.470890 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-logs\") pod \"nova-api-0\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " pod="openstack/nova-api-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.471154 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e696ce68-2834-4538-8662-7c7fb20cc1df-config-data\") pod \"nova-metadata-0\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.471204 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-config-data\") pod \"nova-api-0\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " pod="openstack/nova-api-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.471232 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.471260 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-config\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.471279 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv7js\" (UniqueName: \"kubernetes.io/projected/12e7b427-3991-4edb-90e8-b0e33bc251f7-kube-api-access-vv7js\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.471294 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-dns-svc\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.471311 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a27d93d3-dcd1-44c5-9be2-be70096911e7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a27d93d3-dcd1-44c5-9be2-be70096911e7\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.471329 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " pod="openstack/nova-api-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.471364 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.471387 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.471412 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpzpm\" (UniqueName: \"kubernetes.io/projected/e696ce68-2834-4538-8662-7c7fb20cc1df-kube-api-access-xpzpm\") pod \"nova-metadata-0\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.471414 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-logs\") pod 
\"nova-api-0\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " pod="openstack/nova-api-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.471430 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lwsv\" (UniqueName: \"kubernetes.io/projected/a27d93d3-dcd1-44c5-9be2-be70096911e7-kube-api-access-7lwsv\") pod \"nova-cell1-novncproxy-0\" (UID: \"a27d93d3-dcd1-44c5-9be2-be70096911e7\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.472512 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e696ce68-2834-4538-8662-7c7fb20cc1df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.472618 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e696ce68-2834-4538-8662-7c7fb20cc1df-logs\") pod \"nova-metadata-0\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.472649 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27d93d3-dcd1-44c5-9be2-be70096911e7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a27d93d3-dcd1-44c5-9be2-be70096911e7\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.472718 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r97cc\" (UniqueName: \"kubernetes.io/projected/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-kube-api-access-r97cc\") pod \"nova-api-0\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " pod="openstack/nova-api-0" Nov 24 
12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.474871 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e696ce68-2834-4538-8662-7c7fb20cc1df-logs\") pod \"nova-metadata-0\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.478890 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " pod="openstack/nova-api-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.479183 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-config-data\") pod \"nova-api-0\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " pod="openstack/nova-api-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.481665 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e696ce68-2834-4538-8662-7c7fb20cc1df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.490997 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e696ce68-2834-4538-8662-7c7fb20cc1df-config-data\") pod \"nova-metadata-0\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.498554 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r97cc\" (UniqueName: \"kubernetes.io/projected/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-kube-api-access-r97cc\") pod 
\"nova-api-0\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " pod="openstack/nova-api-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.504096 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpzpm\" (UniqueName: \"kubernetes.io/projected/e696ce68-2834-4538-8662-7c7fb20cc1df-kube-api-access-xpzpm\") pod \"nova-metadata-0\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.511069 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.551500 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.576868 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27d93d3-dcd1-44c5-9be2-be70096911e7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a27d93d3-dcd1-44c5-9be2-be70096911e7\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.577024 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.577057 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-config\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 
12:18:39.577092 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv7js\" (UniqueName: \"kubernetes.io/projected/12e7b427-3991-4edb-90e8-b0e33bc251f7-kube-api-access-vv7js\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.577114 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-dns-svc\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.577139 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27d93d3-dcd1-44c5-9be2-be70096911e7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a27d93d3-dcd1-44c5-9be2-be70096911e7\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.577191 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.577223 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.577260 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7lwsv\" (UniqueName: \"kubernetes.io/projected/a27d93d3-dcd1-44c5-9be2-be70096911e7-kube-api-access-7lwsv\") pod \"nova-cell1-novncproxy-0\" (UID: \"a27d93d3-dcd1-44c5-9be2-be70096911e7\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.580350 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-dns-svc\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.580403 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-config\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.581042 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.581259 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.584475 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.585232 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27d93d3-dcd1-44c5-9be2-be70096911e7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a27d93d3-dcd1-44c5-9be2-be70096911e7\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.588178 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27d93d3-dcd1-44c5-9be2-be70096911e7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a27d93d3-dcd1-44c5-9be2-be70096911e7\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.606273 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv7js\" (UniqueName: \"kubernetes.io/projected/12e7b427-3991-4edb-90e8-b0e33bc251f7-kube-api-access-vv7js\") pod \"dnsmasq-dns-5dd7c4987f-6gntb\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.618381 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lwsv\" (UniqueName: \"kubernetes.io/projected/a27d93d3-dcd1-44c5-9be2-be70096911e7-kube-api-access-7lwsv\") pod \"nova-cell1-novncproxy-0\" (UID: \"a27d93d3-dcd1-44c5-9be2-be70096911e7\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.663218 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ae6345a4-b8af-4e5f-a155-02f7d3929aa6","Type":"ContainerDied","Data":"030905668bdd66b3477dbddce06368d0df13dd5c8bc62d3d832c5fbe4b1b3c89"} Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.663278 4930 scope.go:117] "RemoveContainer" containerID="ddc2c9ee52464d3a505cf5a63d6c7d8f7c67c94b4d5964daad386f3e59aab092" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.663487 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.686466 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.725850 4930 scope.go:117] "RemoveContainer" containerID="602a8a65ceb2ba9cee27649f192de10279d35fc1a1b01c5f47df78e7ff4eafdb" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.726854 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.790529 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.802643 4930 scope.go:117] "RemoveContainer" containerID="c4846387321b1ac0484c6cee4d32d2ffee18cd0e70220c0fb432dfaa72d23c77" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.813195 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.831744 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.840374 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.845057 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.845232 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.870676 4930 scope.go:117] "RemoveContainer" containerID="284981a1397c1cebae549ae8d751c0e651ec91fd6a7d36e6cdbae558569133da" Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.880332 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:18:39 crc kubenswrapper[4930]: I1124 12:18:39.891601 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ttwlm"] Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:39.998415 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-scripts\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:39.998823 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:39.998856 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-run-httpd\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc 
kubenswrapper[4930]: I1124 12:18:39.998918 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-config-data\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:39.998952 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4qmv\" (UniqueName: \"kubernetes.io/projected/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-kube-api-access-j4qmv\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:39.998999 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:39.999168 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-log-httpd\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.036961 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.114448 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 
24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.114488 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-run-httpd\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.114525 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-config-data\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.114562 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4qmv\" (UniqueName: \"kubernetes.io/projected/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-kube-api-access-j4qmv\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.114588 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.114668 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-log-httpd\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.114707 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-scripts\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.120268 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-run-httpd\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.120509 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-log-httpd\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.122253 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-config-data\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: W1124 12:18:40.123125 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0b920e0_82f3_40da_b8ca_f873a99b2ec2.slice/crio-65cb38935a1ce8a136eb2c7b8c0f2103ae421ea627fada0c5bb8252f94280313 WatchSource:0}: Error finding container 65cb38935a1ce8a136eb2c7b8c0f2103ae421ea627fada0c5bb8252f94280313: Status 404 returned error can't find the container with id 65cb38935a1ce8a136eb2c7b8c0f2103ae421ea627fada0c5bb8252f94280313 Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.133429 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-scripts\") pod \"ceilometer-0\" (UID: 
\"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.137832 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6345a4-b8af-4e5f-a155-02f7d3929aa6" path="/var/lib/kubelet/pods/ae6345a4-b8af-4e5f-a155-02f7d3929aa6/volumes" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.139085 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.150315 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.150593 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4qmv\" (UniqueName: \"kubernetes.io/projected/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-kube-api-access-j4qmv\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.151265 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.178767 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.189310 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.399475 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-24swx"] Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.417755 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.422298 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.422573 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.465198 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd9n6\" (UniqueName: \"kubernetes.io/projected/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-kube-api-access-qd9n6\") pod \"nova-cell1-conductor-db-sync-24swx\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.465705 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-24swx\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.466703 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-config-data\") pod \"nova-cell1-conductor-db-sync-24swx\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.466801 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-scripts\") pod \"nova-cell1-conductor-db-sync-24swx\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.467073 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-24swx"] Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.551324 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-6gntb"] Nov 24 12:18:40 crc kubenswrapper[4930]: W1124 12:18:40.565255 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12e7b427_3991_4edb_90e8_b0e33bc251f7.slice/crio-c3af0919fc18b2cb5e60258b35f4e1d6d7f10d75878128ed8abc4febc9fd402f WatchSource:0}: Error finding container c3af0919fc18b2cb5e60258b35f4e1d6d7f10d75878128ed8abc4febc9fd402f: Status 404 returned error can't find the container with id c3af0919fc18b2cb5e60258b35f4e1d6d7f10d75878128ed8abc4febc9fd402f Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.568719 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd9n6\" (UniqueName: \"kubernetes.io/projected/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-kube-api-access-qd9n6\") pod \"nova-cell1-conductor-db-sync-24swx\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.568786 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-24swx\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.568863 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-config-data\") pod \"nova-cell1-conductor-db-sync-24swx\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.568906 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-scripts\") pod \"nova-cell1-conductor-db-sync-24swx\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.584361 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-scripts\") pod \"nova-cell1-conductor-db-sync-24swx\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.585081 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-config-data\") pod \"nova-cell1-conductor-db-sync-24swx\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.599159 4930 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-qd9n6\" (UniqueName: \"kubernetes.io/projected/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-kube-api-access-qd9n6\") pod \"nova-cell1-conductor-db-sync-24swx\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: W1124 12:18:40.608770 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda27d93d3_dcd1_44c5_9be2_be70096911e7.slice/crio-21e2cedafcba0cb7511f2a1b357459cd93c36abacf9db707e4b027fe365ad550 WatchSource:0}: Error finding container 21e2cedafcba0cb7511f2a1b357459cd93c36abacf9db707e4b027fe365ad550: Status 404 returned error can't find the container with id 21e2cedafcba0cb7511f2a1b357459cd93c36abacf9db707e4b027fe365ad550 Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.616028 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-24swx\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.616123 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.687805 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0b920e0-82f3-40da-b8ca-f873a99b2ec2","Type":"ContainerStarted","Data":"65cb38935a1ce8a136eb2c7b8c0f2103ae421ea627fada0c5bb8252f94280313"} Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.689873 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a27d93d3-dcd1-44c5-9be2-be70096911e7","Type":"ContainerStarted","Data":"21e2cedafcba0cb7511f2a1b357459cd93c36abacf9db707e4b027fe365ad550"} Nov 24 
12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.693711 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ttwlm" event={"ID":"51338fbc-fcb2-458b-9b02-8f7fec515821","Type":"ContainerStarted","Data":"3a7ed2b94fa6114dd857cba24ec3d5a5f49d0476fda615c89f4c741f72768a45"} Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.696128 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ttwlm" event={"ID":"51338fbc-fcb2-458b-9b02-8f7fec515821","Type":"ContainerStarted","Data":"4add802dfa059c64f7e3139c59bb75658ff863b3299818a552092466d2856f03"} Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.703188 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" event={"ID":"12e7b427-3991-4edb-90e8-b0e33bc251f7","Type":"ContainerStarted","Data":"c3af0919fc18b2cb5e60258b35f4e1d6d7f10d75878128ed8abc4febc9fd402f"} Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.714439 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01aecd0b-d42b-494d-a6a6-d5294c283a8a","Type":"ContainerStarted","Data":"1046f52c6f3e2b6b27cd8671c1610a6df628495fd059bf26bb4d07ee857aef7c"} Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.734517 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-ttwlm" podStartSLOduration=2.734494857 podStartE2EDuration="2.734494857s" podCreationTimestamp="2025-11-24 12:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:18:40.715898192 +0000 UTC m=+1167.330226162" watchObservedRunningTime="2025-11-24 12:18:40.734494857 +0000 UTC m=+1167.348822817" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.735035 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e696ce68-2834-4538-8662-7c7fb20cc1df","Type":"ContainerStarted","Data":"48f66e6bf619e15a2306e3ed19b7aeb885d31b06e686eda8402290126c2d2cee"} Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.743349 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:40 crc kubenswrapper[4930]: I1124 12:18:40.822365 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:18:40 crc kubenswrapper[4930]: W1124 12:18:40.846389 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7df1be6c_f0d2_4f1b_8e68_e0a05d9e0eba.slice/crio-eb2c752d84a13025a708e8c6176c4c4843f14bc2d4e0a4a7e5368f263138be1f WatchSource:0}: Error finding container eb2c752d84a13025a708e8c6176c4c4843f14bc2d4e0a4a7e5368f263138be1f: Status 404 returned error can't find the container with id eb2c752d84a13025a708e8c6176c4c4843f14bc2d4e0a4a7e5368f263138be1f Nov 24 12:18:41 crc kubenswrapper[4930]: I1124 12:18:41.263294 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-24swx"] Nov 24 12:18:41 crc kubenswrapper[4930]: I1124 12:18:41.753269 4930 generic.go:334] "Generic (PLEG): container finished" podID="12e7b427-3991-4edb-90e8-b0e33bc251f7" containerID="142cda624ec1dfc667fbe3c1c508fc12c723b097c59b7320989fbefe8c5c341a" exitCode=0 Nov 24 12:18:41 crc kubenswrapper[4930]: I1124 12:18:41.753315 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" event={"ID":"12e7b427-3991-4edb-90e8-b0e33bc251f7","Type":"ContainerDied","Data":"142cda624ec1dfc667fbe3c1c508fc12c723b097c59b7320989fbefe8c5c341a"} Nov 24 12:18:41 crc kubenswrapper[4930]: I1124 12:18:41.755406 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-24swx" 
event={"ID":"6eb892b0-86ce-42f6-9c90-8acdb9a90a41","Type":"ContainerStarted","Data":"bc1e1e995ba678bcfea2404ea96a1f998466501417fdbf1929fe713d23a9d9f0"} Nov 24 12:18:41 crc kubenswrapper[4930]: I1124 12:18:41.755448 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-24swx" event={"ID":"6eb892b0-86ce-42f6-9c90-8acdb9a90a41","Type":"ContainerStarted","Data":"6b7eb0904be9065b5c94e0bba4cd8c8370259b9cc66924cd1c614b59a209c047"} Nov 24 12:18:41 crc kubenswrapper[4930]: I1124 12:18:41.759281 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba","Type":"ContainerStarted","Data":"eb2c752d84a13025a708e8c6176c4c4843f14bc2d4e0a4a7e5368f263138be1f"} Nov 24 12:18:42 crc kubenswrapper[4930]: I1124 12:18:42.788420 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-24swx" podStartSLOduration=2.788397577 podStartE2EDuration="2.788397577s" podCreationTimestamp="2025-11-24 12:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:18:41.834877463 +0000 UTC m=+1168.449205413" watchObservedRunningTime="2025-11-24 12:18:42.788397577 +0000 UTC m=+1169.402725527" Nov 24 12:18:42 crc kubenswrapper[4930]: I1124 12:18:42.801122 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:18:42 crc kubenswrapper[4930]: I1124 12:18:42.811103 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.804154 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a27d93d3-dcd1-44c5-9be2-be70096911e7","Type":"ContainerStarted","Data":"1addd5c9aac389f78ff626f82c755c042dae08f223a8f242fc015bccbf06a739"} Nov 24 12:18:44 crc 
kubenswrapper[4930]: I1124 12:18:44.804240 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a27d93d3-dcd1-44c5-9be2-be70096911e7" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://1addd5c9aac389f78ff626f82c755c042dae08f223a8f242fc015bccbf06a739" gracePeriod=30 Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.811492 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" event={"ID":"12e7b427-3991-4edb-90e8-b0e33bc251f7","Type":"ContainerStarted","Data":"b6b8226d468ce38822cf012e6ce2cb2cce9a98e44c919a3fbe18bae42a31638e"} Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.811837 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.813106 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01aecd0b-d42b-494d-a6a6-d5294c283a8a","Type":"ContainerStarted","Data":"20ae1a448d7c19a83afeff29b7e02d7870755d4d01a28e06a268b90df559d3ea"} Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.820987 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e696ce68-2834-4538-8662-7c7fb20cc1df","Type":"ContainerStarted","Data":"99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115"} Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.821039 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e696ce68-2834-4538-8662-7c7fb20cc1df","Type":"ContainerStarted","Data":"2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb"} Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.821140 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e696ce68-2834-4538-8662-7c7fb20cc1df" 
containerName="nova-metadata-log" containerID="cri-o://2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb" gracePeriod=30 Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.821169 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e696ce68-2834-4538-8662-7c7fb20cc1df" containerName="nova-metadata-metadata" containerID="cri-o://99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115" gracePeriod=30 Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.826159 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.568110945 podStartE2EDuration="5.826139534s" podCreationTimestamp="2025-11-24 12:18:39 +0000 UTC" firstStartedPulling="2025-11-24 12:18:40.624125262 +0000 UTC m=+1167.238453212" lastFinishedPulling="2025-11-24 12:18:43.882153851 +0000 UTC m=+1170.496481801" observedRunningTime="2025-11-24 12:18:44.82322351 +0000 UTC m=+1171.437551460" watchObservedRunningTime="2025-11-24 12:18:44.826139534 +0000 UTC m=+1171.440467484" Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.837116 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0b920e0-82f3-40da-b8ca-f873a99b2ec2","Type":"ContainerStarted","Data":"6c614cd52ae50d8b0b70d3ffad9c18f9fabaf357fc75250b15765f08077b7819"} Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.838373 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0b920e0-82f3-40da-b8ca-f873a99b2ec2","Type":"ContainerStarted","Data":"f7eeb4887fac0731be840e6cdfbbe6cdafac2613ab70427fc9b7989665a5c94c"} Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.857275 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba","Type":"ContainerStarted","Data":"5e1fe972b1ba6b54a2dc70773062582defe9a0330cd004312b20abe15b1f281a"} Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.871335 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.197722516 podStartE2EDuration="5.871315216s" podCreationTimestamp="2025-11-24 12:18:39 +0000 UTC" firstStartedPulling="2025-11-24 12:18:40.20853227 +0000 UTC m=+1166.822860220" lastFinishedPulling="2025-11-24 12:18:43.88212498 +0000 UTC m=+1170.496452920" observedRunningTime="2025-11-24 12:18:44.841555388 +0000 UTC m=+1171.455883338" watchObservedRunningTime="2025-11-24 12:18:44.871315216 +0000 UTC m=+1171.485643166" Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.876463 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" podStartSLOduration=5.876442433 podStartE2EDuration="5.876442433s" podCreationTimestamp="2025-11-24 12:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:18:44.861782451 +0000 UTC m=+1171.476110411" watchObservedRunningTime="2025-11-24 12:18:44.876442433 +0000 UTC m=+1171.490770383" Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 12:18:44.889326 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.124609463 podStartE2EDuration="6.889306884s" podCreationTimestamp="2025-11-24 12:18:38 +0000 UTC" firstStartedPulling="2025-11-24 12:18:40.117342447 +0000 UTC m=+1166.731670397" lastFinishedPulling="2025-11-24 12:18:43.882039868 +0000 UTC m=+1170.496367818" observedRunningTime="2025-11-24 12:18:44.878440921 +0000 UTC m=+1171.492768871" watchObservedRunningTime="2025-11-24 12:18:44.889306884 +0000 UTC m=+1171.503634834" Nov 24 12:18:44 crc kubenswrapper[4930]: I1124 
12:18:44.903057 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.170969666 podStartE2EDuration="5.903036619s" podCreationTimestamp="2025-11-24 12:18:39 +0000 UTC" firstStartedPulling="2025-11-24 12:18:40.149934764 +0000 UTC m=+1166.764262714" lastFinishedPulling="2025-11-24 12:18:43.882001717 +0000 UTC m=+1170.496329667" observedRunningTime="2025-11-24 12:18:44.900126345 +0000 UTC m=+1171.514454295" watchObservedRunningTime="2025-11-24 12:18:44.903036619 +0000 UTC m=+1171.517364569" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.447646 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.579863 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpzpm\" (UniqueName: \"kubernetes.io/projected/e696ce68-2834-4538-8662-7c7fb20cc1df-kube-api-access-xpzpm\") pod \"e696ce68-2834-4538-8662-7c7fb20cc1df\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.579930 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e696ce68-2834-4538-8662-7c7fb20cc1df-logs\") pod \"e696ce68-2834-4538-8662-7c7fb20cc1df\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.580082 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e696ce68-2834-4538-8662-7c7fb20cc1df-config-data\") pod \"e696ce68-2834-4538-8662-7c7fb20cc1df\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.580116 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e696ce68-2834-4538-8662-7c7fb20cc1df-combined-ca-bundle\") pod \"e696ce68-2834-4538-8662-7c7fb20cc1df\" (UID: \"e696ce68-2834-4538-8662-7c7fb20cc1df\") " Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.580510 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e696ce68-2834-4538-8662-7c7fb20cc1df-logs" (OuterVolumeSpecName: "logs") pod "e696ce68-2834-4538-8662-7c7fb20cc1df" (UID: "e696ce68-2834-4538-8662-7c7fb20cc1df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.580661 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e696ce68-2834-4538-8662-7c7fb20cc1df-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.587943 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e696ce68-2834-4538-8662-7c7fb20cc1df-kube-api-access-xpzpm" (OuterVolumeSpecName: "kube-api-access-xpzpm") pod "e696ce68-2834-4538-8662-7c7fb20cc1df" (UID: "e696ce68-2834-4538-8662-7c7fb20cc1df"). InnerVolumeSpecName "kube-api-access-xpzpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.615587 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e696ce68-2834-4538-8662-7c7fb20cc1df-config-data" (OuterVolumeSpecName: "config-data") pod "e696ce68-2834-4538-8662-7c7fb20cc1df" (UID: "e696ce68-2834-4538-8662-7c7fb20cc1df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.634009 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e696ce68-2834-4538-8662-7c7fb20cc1df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e696ce68-2834-4538-8662-7c7fb20cc1df" (UID: "e696ce68-2834-4538-8662-7c7fb20cc1df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.683741 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpzpm\" (UniqueName: \"kubernetes.io/projected/e696ce68-2834-4538-8662-7c7fb20cc1df-kube-api-access-xpzpm\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.683773 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e696ce68-2834-4538-8662-7c7fb20cc1df-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.683810 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e696ce68-2834-4538-8662-7c7fb20cc1df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.872811 4930 generic.go:334] "Generic (PLEG): container finished" podID="e696ce68-2834-4538-8662-7c7fb20cc1df" containerID="99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115" exitCode=0 Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.872847 4930 generic.go:334] "Generic (PLEG): container finished" podID="e696ce68-2834-4538-8662-7c7fb20cc1df" containerID="2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb" exitCode=143 Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.872899 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e696ce68-2834-4538-8662-7c7fb20cc1df","Type":"ContainerDied","Data":"99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115"} Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.872931 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e696ce68-2834-4538-8662-7c7fb20cc1df","Type":"ContainerDied","Data":"2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb"} Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.872945 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e696ce68-2834-4538-8662-7c7fb20cc1df","Type":"ContainerDied","Data":"48f66e6bf619e15a2306e3ed19b7aeb885d31b06e686eda8402290126c2d2cee"} Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.872960 4930 scope.go:117] "RemoveContainer" containerID="99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.874177 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.880310 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba","Type":"ContainerStarted","Data":"26d4dcd74c4103be93957256f33474b8438f7b26dbe33580b6eb5ccb4b1eefd2"} Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.905004 4930 scope.go:117] "RemoveContainer" containerID="2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.994804 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.998467 4930 scope.go:117] "RemoveContainer" containerID="99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115" Nov 24 12:18:45 crc kubenswrapper[4930]: E1124 12:18:45.998893 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115\": container with ID starting with 99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115 not found: ID does not exist" containerID="99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.998927 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115"} err="failed to get container status \"99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115\": rpc error: code = NotFound desc = could not find container \"99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115\": container with ID starting with 99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115 not found: ID does not exist" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.998948 4930 
scope.go:117] "RemoveContainer" containerID="2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb" Nov 24 12:18:45 crc kubenswrapper[4930]: E1124 12:18:45.999259 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb\": container with ID starting with 2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb not found: ID does not exist" containerID="2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.999307 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb"} err="failed to get container status \"2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb\": rpc error: code = NotFound desc = could not find container \"2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb\": container with ID starting with 2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb not found: ID does not exist" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.999337 4930 scope.go:117] "RemoveContainer" containerID="99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 12:18:45.999763 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115"} err="failed to get container status \"99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115\": rpc error: code = NotFound desc = could not find container \"99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115\": container with ID starting with 99e01de1572f951a304e93f4a16b9307c0b5d320818a3ab22c2e4c3c4d850115 not found: ID does not exist" Nov 24 12:18:45 crc kubenswrapper[4930]: I1124 
12:18:45.999788 4930 scope.go:117] "RemoveContainer" containerID="2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.000002 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb"} err="failed to get container status \"2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb\": rpc error: code = NotFound desc = could not find container \"2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb\": container with ID starting with 2ccd536399a1caab8891a92143cfc4ba178719f136256d32268d3c63fb0a9beb not found: ID does not exist" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.006550 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.022925 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:46 crc kubenswrapper[4930]: E1124 12:18:46.023584 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e696ce68-2834-4538-8662-7c7fb20cc1df" containerName="nova-metadata-log" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.023605 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="e696ce68-2834-4538-8662-7c7fb20cc1df" containerName="nova-metadata-log" Nov 24 12:18:46 crc kubenswrapper[4930]: E1124 12:18:46.023631 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e696ce68-2834-4538-8662-7c7fb20cc1df" containerName="nova-metadata-metadata" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.023640 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="e696ce68-2834-4538-8662-7c7fb20cc1df" containerName="nova-metadata-metadata" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.023876 4930 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e696ce68-2834-4538-8662-7c7fb20cc1df" containerName="nova-metadata-log" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.023893 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="e696ce68-2834-4538-8662-7c7fb20cc1df" containerName="nova-metadata-metadata" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.031931 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.047295 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.056737 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.116656 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e696ce68-2834-4538-8662-7c7fb20cc1df" path="/var/lib/kubelet/pods/e696ce68-2834-4538-8662-7c7fb20cc1df/volumes" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.122794 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.213795 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-config-data\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.213862 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fccc9d3a-880a-4f7c-8223-757733259250-logs\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 
12:18:46.213896 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.214218 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.214525 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dt6t\" (UniqueName: \"kubernetes.io/projected/fccc9d3a-880a-4f7c-8223-757733259250-kube-api-access-7dt6t\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.316605 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dt6t\" (UniqueName: \"kubernetes.io/projected/fccc9d3a-880a-4f7c-8223-757733259250-kube-api-access-7dt6t\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.316725 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-config-data\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.316764 4930 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fccc9d3a-880a-4f7c-8223-757733259250-logs\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.316817 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.316908 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.317669 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fccc9d3a-880a-4f7c-8223-757733259250-logs\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.325366 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.325383 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " 
pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.325792 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-config-data\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.335047 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dt6t\" (UniqueName: \"kubernetes.io/projected/fccc9d3a-880a-4f7c-8223-757733259250-kube-api-access-7dt6t\") pod \"nova-metadata-0\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.389089 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.829810 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:46 crc kubenswrapper[4930]: W1124 12:18:46.830793 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfccc9d3a_880a_4f7c_8223_757733259250.slice/crio-f850681cc70a2875461ad869f8cfcebd78cc7b7e70186ab671af2f736ec65cc6 WatchSource:0}: Error finding container f850681cc70a2875461ad869f8cfcebd78cc7b7e70186ab671af2f736ec65cc6: Status 404 returned error can't find the container with id f850681cc70a2875461ad869f8cfcebd78cc7b7e70186ab671af2f736ec65cc6 Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.897180 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fccc9d3a-880a-4f7c-8223-757733259250","Type":"ContainerStarted","Data":"f850681cc70a2875461ad869f8cfcebd78cc7b7e70186ab671af2f736ec65cc6"} Nov 24 12:18:46 crc kubenswrapper[4930]: I1124 12:18:46.899238 4930 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba","Type":"ContainerStarted","Data":"963d91f9301ebf59e8f63cd9dfc2be3fc865dd50e7bbac2c90eae0774f0643cf"} Nov 24 12:18:47 crc kubenswrapper[4930]: I1124 12:18:47.917923 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fccc9d3a-880a-4f7c-8223-757733259250","Type":"ContainerStarted","Data":"89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109"} Nov 24 12:18:47 crc kubenswrapper[4930]: I1124 12:18:47.918683 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fccc9d3a-880a-4f7c-8223-757733259250","Type":"ContainerStarted","Data":"a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9"} Nov 24 12:18:47 crc kubenswrapper[4930]: I1124 12:18:47.921202 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba","Type":"ContainerStarted","Data":"0e6c00d3b5462dbbd72a61dcc43693dac3cecb13daf54e979fa645bf82172965"} Nov 24 12:18:47 crc kubenswrapper[4930]: I1124 12:18:47.921843 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:18:47 crc kubenswrapper[4930]: I1124 12:18:47.947824 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.947804237 podStartE2EDuration="2.947804237s" podCreationTimestamp="2025-11-24 12:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:18:47.938809948 +0000 UTC m=+1174.553137898" watchObservedRunningTime="2025-11-24 12:18:47.947804237 +0000 UTC m=+1174.562132187" Nov 24 12:18:47 crc kubenswrapper[4930]: I1124 12:18:47.980771 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.884810237 podStartE2EDuration="8.980750066s" podCreationTimestamp="2025-11-24 12:18:39 +0000 UTC" firstStartedPulling="2025-11-24 12:18:40.851318956 +0000 UTC m=+1167.465646906" lastFinishedPulling="2025-11-24 12:18:46.947258795 +0000 UTC m=+1173.561586735" observedRunningTime="2025-11-24 12:18:47.95660485 +0000 UTC m=+1174.570932820" watchObservedRunningTime="2025-11-24 12:18:47.980750066 +0000 UTC m=+1174.595078016" Nov 24 12:18:48 crc kubenswrapper[4930]: I1124 12:18:48.931010 4930 generic.go:334] "Generic (PLEG): container finished" podID="51338fbc-fcb2-458b-9b02-8f7fec515821" containerID="3a7ed2b94fa6114dd857cba24ec3d5a5f49d0476fda615c89f4c741f72768a45" exitCode=0 Nov 24 12:18:48 crc kubenswrapper[4930]: I1124 12:18:48.931097 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ttwlm" event={"ID":"51338fbc-fcb2-458b-9b02-8f7fec515821","Type":"ContainerDied","Data":"3a7ed2b94fa6114dd857cba24ec3d5a5f49d0476fda615c89f4c741f72768a45"} Nov 24 12:18:49 crc kubenswrapper[4930]: I1124 12:18:49.351302 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 12:18:49 crc kubenswrapper[4930]: I1124 12:18:49.351684 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 12:18:49 crc kubenswrapper[4930]: I1124 12:18:49.381446 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 12:18:49 crc kubenswrapper[4930]: I1124 12:18:49.511849 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:18:49 crc kubenswrapper[4930]: I1124 12:18:49.511912 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:18:49 crc kubenswrapper[4930]: I1124 12:18:49.688736 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:18:49 crc kubenswrapper[4930]: I1124 12:18:49.728116 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:18:49 crc kubenswrapper[4930]: I1124 12:18:49.742649 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-cjwcc"] Nov 24 12:18:49 crc kubenswrapper[4930]: I1124 12:18:49.742889 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-797bbc649-cjwcc" podUID="77b75b6e-7ded-4307-8e62-b15ff18acffe" containerName="dnsmasq-dns" containerID="cri-o://7ed6802db67ed4630c5d4a52cc5bcd91065f68bcf837bad5a238b1e052263046" gracePeriod=10 Nov 24 12:18:49 crc kubenswrapper[4930]: I1124 12:18:49.980490 4930 generic.go:334] "Generic (PLEG): container finished" podID="77b75b6e-7ded-4307-8e62-b15ff18acffe" containerID="7ed6802db67ed4630c5d4a52cc5bcd91065f68bcf837bad5a238b1e052263046" exitCode=0 Nov 24 12:18:49 crc kubenswrapper[4930]: I1124 12:18:49.980673 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-cjwcc" event={"ID":"77b75b6e-7ded-4307-8e62-b15ff18acffe","Type":"ContainerDied","Data":"7ed6802db67ed4630c5d4a52cc5bcd91065f68bcf837bad5a238b1e052263046"} Nov 24 12:18:49 crc kubenswrapper[4930]: I1124 12:18:49.984188 4930 generic.go:334] "Generic (PLEG): container finished" podID="6eb892b0-86ce-42f6-9c90-8acdb9a90a41" containerID="bc1e1e995ba678bcfea2404ea96a1f998466501417fdbf1929fe713d23a9d9f0" exitCode=0 Nov 24 12:18:49 crc kubenswrapper[4930]: I1124 12:18:49.984445 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-24swx" event={"ID":"6eb892b0-86ce-42f6-9c90-8acdb9a90a41","Type":"ContainerDied","Data":"bc1e1e995ba678bcfea2404ea96a1f998466501417fdbf1929fe713d23a9d9f0"} Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.045228 4930 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.544398 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.548858 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.594723 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e0b920e0-82f3-40da-b8ca-f873a99b2ec2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.594840 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e0b920e0-82f3-40da-b8ca-f873a99b2ec2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.700789 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-dns-svc\") pod \"77b75b6e-7ded-4307-8e62-b15ff18acffe\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.700838 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4pmm\" (UniqueName: \"kubernetes.io/projected/51338fbc-fcb2-458b-9b02-8f7fec515821-kube-api-access-f4pmm\") pod \"51338fbc-fcb2-458b-9b02-8f7fec515821\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.700897 4930 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-config-data\") pod \"51338fbc-fcb2-458b-9b02-8f7fec515821\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.700929 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzjbv\" (UniqueName: \"kubernetes.io/projected/77b75b6e-7ded-4307-8e62-b15ff18acffe-kube-api-access-qzjbv\") pod \"77b75b6e-7ded-4307-8e62-b15ff18acffe\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.701005 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-combined-ca-bundle\") pod \"51338fbc-fcb2-458b-9b02-8f7fec515821\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.701043 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-config\") pod \"77b75b6e-7ded-4307-8e62-b15ff18acffe\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.701096 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-scripts\") pod \"51338fbc-fcb2-458b-9b02-8f7fec515821\" (UID: \"51338fbc-fcb2-458b-9b02-8f7fec515821\") " Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.701126 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-ovsdbserver-sb\") pod \"77b75b6e-7ded-4307-8e62-b15ff18acffe\" (UID: 
\"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.701145 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-ovsdbserver-nb\") pod \"77b75b6e-7ded-4307-8e62-b15ff18acffe\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.701193 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-dns-swift-storage-0\") pod \"77b75b6e-7ded-4307-8e62-b15ff18acffe\" (UID: \"77b75b6e-7ded-4307-8e62-b15ff18acffe\") " Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.708389 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-scripts" (OuterVolumeSpecName: "scripts") pod "51338fbc-fcb2-458b-9b02-8f7fec515821" (UID: "51338fbc-fcb2-458b-9b02-8f7fec515821"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.716338 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51338fbc-fcb2-458b-9b02-8f7fec515821-kube-api-access-f4pmm" (OuterVolumeSpecName: "kube-api-access-f4pmm") pod "51338fbc-fcb2-458b-9b02-8f7fec515821" (UID: "51338fbc-fcb2-458b-9b02-8f7fec515821"). InnerVolumeSpecName "kube-api-access-f4pmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.724741 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b75b6e-7ded-4307-8e62-b15ff18acffe-kube-api-access-qzjbv" (OuterVolumeSpecName: "kube-api-access-qzjbv") pod "77b75b6e-7ded-4307-8e62-b15ff18acffe" (UID: "77b75b6e-7ded-4307-8e62-b15ff18acffe"). InnerVolumeSpecName "kube-api-access-qzjbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.754288 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51338fbc-fcb2-458b-9b02-8f7fec515821" (UID: "51338fbc-fcb2-458b-9b02-8f7fec515821"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.772914 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-config-data" (OuterVolumeSpecName: "config-data") pod "51338fbc-fcb2-458b-9b02-8f7fec515821" (UID: "51338fbc-fcb2-458b-9b02-8f7fec515821"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.775302 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "77b75b6e-7ded-4307-8e62-b15ff18acffe" (UID: "77b75b6e-7ded-4307-8e62-b15ff18acffe"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.783145 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "77b75b6e-7ded-4307-8e62-b15ff18acffe" (UID: "77b75b6e-7ded-4307-8e62-b15ff18acffe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.799560 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-config" (OuterVolumeSpecName: "config") pod "77b75b6e-7ded-4307-8e62-b15ff18acffe" (UID: "77b75b6e-7ded-4307-8e62-b15ff18acffe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.804279 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.804309 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4pmm\" (UniqueName: \"kubernetes.io/projected/51338fbc-fcb2-458b-9b02-8f7fec515821-kube-api-access-f4pmm\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.804320 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.804333 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzjbv\" (UniqueName: \"kubernetes.io/projected/77b75b6e-7ded-4307-8e62-b15ff18acffe-kube-api-access-qzjbv\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc 
kubenswrapper[4930]: I1124 12:18:50.804341 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.804349 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.804357 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51338fbc-fcb2-458b-9b02-8f7fec515821-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.804366 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.804780 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "77b75b6e-7ded-4307-8e62-b15ff18acffe" (UID: "77b75b6e-7ded-4307-8e62-b15ff18acffe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.805004 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "77b75b6e-7ded-4307-8e62-b15ff18acffe" (UID: "77b75b6e-7ded-4307-8e62-b15ff18acffe"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.905814 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.906101 4930 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77b75b6e-7ded-4307-8e62-b15ff18acffe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.997524 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-cjwcc" event={"ID":"77b75b6e-7ded-4307-8e62-b15ff18acffe","Type":"ContainerDied","Data":"694c66f04102d012f6397a3f6dcec2beec05223c0684690a8d6c15d5edb9e8cc"} Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.997568 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-cjwcc" Nov 24 12:18:50 crc kubenswrapper[4930]: I1124 12:18:50.997608 4930 scope.go:117] "RemoveContainer" containerID="7ed6802db67ed4630c5d4a52cc5bcd91065f68bcf837bad5a238b1e052263046" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.003292 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ttwlm" event={"ID":"51338fbc-fcb2-458b-9b02-8f7fec515821","Type":"ContainerDied","Data":"4add802dfa059c64f7e3139c59bb75658ff863b3299818a552092466d2856f03"} Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.003335 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4add802dfa059c64f7e3139c59bb75658ff863b3299818a552092466d2856f03" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.003366 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ttwlm" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.058598 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-cjwcc"] Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.065608 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-cjwcc"] Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.108743 4930 scope.go:117] "RemoveContainer" containerID="1a5f89f62e5f3d75aad6dcc3396a389246bdf3560cf8da0b3a75d9bf19059856" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.228410 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.228629 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e0b920e0-82f3-40da-b8ca-f873a99b2ec2" containerName="nova-api-log" containerID="cri-o://f7eeb4887fac0731be840e6cdfbbe6cdafac2613ab70427fc9b7989665a5c94c" gracePeriod=30 Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.228957 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e0b920e0-82f3-40da-b8ca-f873a99b2ec2" containerName="nova-api-api" containerID="cri-o://6c614cd52ae50d8b0b70d3ffad9c18f9fabaf357fc75250b15765f08077b7819" gracePeriod=30 Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.332933 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.333228 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fccc9d3a-880a-4f7c-8223-757733259250" containerName="nova-metadata-log" containerID="cri-o://a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9" gracePeriod=30 Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.333844 4930 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fccc9d3a-880a-4f7c-8223-757733259250" containerName="nova-metadata-metadata" containerID="cri-o://89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109" gracePeriod=30 Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.380191 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.390827 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.390910 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.665682 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.832441 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-combined-ca-bundle\") pod \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.832617 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-scripts\") pod \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.832663 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd9n6\" (UniqueName: \"kubernetes.io/projected/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-kube-api-access-qd9n6\") pod \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\" (UID: 
\"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.832696 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-config-data\") pod \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\" (UID: \"6eb892b0-86ce-42f6-9c90-8acdb9a90a41\") " Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.838963 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-scripts" (OuterVolumeSpecName: "scripts") pod "6eb892b0-86ce-42f6-9c90-8acdb9a90a41" (UID: "6eb892b0-86ce-42f6-9c90-8acdb9a90a41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.843127 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-kube-api-access-qd9n6" (OuterVolumeSpecName: "kube-api-access-qd9n6") pod "6eb892b0-86ce-42f6-9c90-8acdb9a90a41" (UID: "6eb892b0-86ce-42f6-9c90-8acdb9a90a41"). InnerVolumeSpecName "kube-api-access-qd9n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.892591 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-config-data" (OuterVolumeSpecName: "config-data") pod "6eb892b0-86ce-42f6-9c90-8acdb9a90a41" (UID: "6eb892b0-86ce-42f6-9c90-8acdb9a90a41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.897288 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eb892b0-86ce-42f6-9c90-8acdb9a90a41" (UID: "6eb892b0-86ce-42f6-9c90-8acdb9a90a41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.935015 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.935053 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd9n6\" (UniqueName: \"kubernetes.io/projected/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-kube-api-access-qd9n6\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.935064 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.935073 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb892b0-86ce-42f6-9c90-8acdb9a90a41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:51 crc kubenswrapper[4930]: I1124 12:18:51.985040 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.022097 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-24swx" event={"ID":"6eb892b0-86ce-42f6-9c90-8acdb9a90a41","Type":"ContainerDied","Data":"6b7eb0904be9065b5c94e0bba4cd8c8370259b9cc66924cd1c614b59a209c047"} Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.022140 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b7eb0904be9065b5c94e0bba4cd8c8370259b9cc66924cd1c614b59a209c047" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.022191 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-24swx" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.049252 4930 generic.go:334] "Generic (PLEG): container finished" podID="fccc9d3a-880a-4f7c-8223-757733259250" containerID="89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109" exitCode=0 Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.049289 4930 generic.go:334] "Generic (PLEG): container finished" podID="fccc9d3a-880a-4f7c-8223-757733259250" containerID="a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9" exitCode=143 Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.049346 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fccc9d3a-880a-4f7c-8223-757733259250","Type":"ContainerDied","Data":"89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109"} Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.049391 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fccc9d3a-880a-4f7c-8223-757733259250","Type":"ContainerDied","Data":"a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9"} Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.049405 4930 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fccc9d3a-880a-4f7c-8223-757733259250","Type":"ContainerDied","Data":"f850681cc70a2875461ad869f8cfcebd78cc7b7e70186ab671af2f736ec65cc6"} Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.049421 4930 scope.go:117] "RemoveContainer" containerID="89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.049721 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.067794 4930 generic.go:334] "Generic (PLEG): container finished" podID="e0b920e0-82f3-40da-b8ca-f873a99b2ec2" containerID="f7eeb4887fac0731be840e6cdfbbe6cdafac2613ab70427fc9b7989665a5c94c" exitCode=143 Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.068091 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="01aecd0b-d42b-494d-a6a6-d5294c283a8a" containerName="nova-scheduler-scheduler" containerID="cri-o://20ae1a448d7c19a83afeff29b7e02d7870755d4d01a28e06a268b90df559d3ea" gracePeriod=30 Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.068589 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0b920e0-82f3-40da-b8ca-f873a99b2ec2","Type":"ContainerDied","Data":"f7eeb4887fac0731be840e6cdfbbe6cdafac2613ab70427fc9b7989665a5c94c"} Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.143763 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b75b6e-7ded-4307-8e62-b15ff18acffe" path="/var/lib/kubelet/pods/77b75b6e-7ded-4307-8e62-b15ff18acffe/volumes" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.146286 4930 scope.go:117] "RemoveContainer" containerID="a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.153227 4930 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-config-data\") pod \"fccc9d3a-880a-4f7c-8223-757733259250\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.153315 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dt6t\" (UniqueName: \"kubernetes.io/projected/fccc9d3a-880a-4f7c-8223-757733259250-kube-api-access-7dt6t\") pod \"fccc9d3a-880a-4f7c-8223-757733259250\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.153351 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-combined-ca-bundle\") pod \"fccc9d3a-880a-4f7c-8223-757733259250\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.153392 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fccc9d3a-880a-4f7c-8223-757733259250-logs\") pod \"fccc9d3a-880a-4f7c-8223-757733259250\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.153686 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-nova-metadata-tls-certs\") pod \"fccc9d3a-880a-4f7c-8223-757733259250\" (UID: \"fccc9d3a-880a-4f7c-8223-757733259250\") " Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.182690 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fccc9d3a-880a-4f7c-8223-757733259250-logs" (OuterVolumeSpecName: "logs") pod "fccc9d3a-880a-4f7c-8223-757733259250" (UID: 
"fccc9d3a-880a-4f7c-8223-757733259250"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.207108 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fccc9d3a-880a-4f7c-8223-757733259250-kube-api-access-7dt6t" (OuterVolumeSpecName: "kube-api-access-7dt6t") pod "fccc9d3a-880a-4f7c-8223-757733259250" (UID: "fccc9d3a-880a-4f7c-8223-757733259250"). InnerVolumeSpecName "kube-api-access-7dt6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.207174 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 12:18:52 crc kubenswrapper[4930]: E1124 12:18:52.207778 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fccc9d3a-880a-4f7c-8223-757733259250" containerName="nova-metadata-log" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.207850 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="fccc9d3a-880a-4f7c-8223-757733259250" containerName="nova-metadata-log" Nov 24 12:18:52 crc kubenswrapper[4930]: E1124 12:18:52.207933 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51338fbc-fcb2-458b-9b02-8f7fec515821" containerName="nova-manage" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.207941 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="51338fbc-fcb2-458b-9b02-8f7fec515821" containerName="nova-manage" Nov 24 12:18:52 crc kubenswrapper[4930]: E1124 12:18:52.207978 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b75b6e-7ded-4307-8e62-b15ff18acffe" containerName="dnsmasq-dns" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.207987 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b75b6e-7ded-4307-8e62-b15ff18acffe" containerName="dnsmasq-dns" Nov 24 12:18:52 crc kubenswrapper[4930]: E1124 12:18:52.208007 4930 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b75b6e-7ded-4307-8e62-b15ff18acffe" containerName="init" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.208015 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b75b6e-7ded-4307-8e62-b15ff18acffe" containerName="init" Nov 24 12:18:52 crc kubenswrapper[4930]: E1124 12:18:52.208059 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb892b0-86ce-42f6-9c90-8acdb9a90a41" containerName="nova-cell1-conductor-db-sync" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.208069 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb892b0-86ce-42f6-9c90-8acdb9a90a41" containerName="nova-cell1-conductor-db-sync" Nov 24 12:18:52 crc kubenswrapper[4930]: E1124 12:18:52.208104 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fccc9d3a-880a-4f7c-8223-757733259250" containerName="nova-metadata-metadata" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.208137 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="fccc9d3a-880a-4f7c-8223-757733259250" containerName="nova-metadata-metadata" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.208799 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb892b0-86ce-42f6-9c90-8acdb9a90a41" containerName="nova-cell1-conductor-db-sync" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.208856 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="51338fbc-fcb2-458b-9b02-8f7fec515821" containerName="nova-manage" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.208869 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b75b6e-7ded-4307-8e62-b15ff18acffe" containerName="dnsmasq-dns" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.208887 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="fccc9d3a-880a-4f7c-8223-757733259250" containerName="nova-metadata-log" Nov 24 12:18:52 crc 
kubenswrapper[4930]: I1124 12:18:52.208897 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="fccc9d3a-880a-4f7c-8223-757733259250" containerName="nova-metadata-metadata" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.210710 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.214911 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.235966 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.257938 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dt6t\" (UniqueName: \"kubernetes.io/projected/fccc9d3a-880a-4f7c-8223-757733259250-kube-api-access-7dt6t\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.257980 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fccc9d3a-880a-4f7c-8223-757733259250-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.266181 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fccc9d3a-880a-4f7c-8223-757733259250" (UID: "fccc9d3a-880a-4f7c-8223-757733259250"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.266709 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fccc9d3a-880a-4f7c-8223-757733259250" (UID: "fccc9d3a-880a-4f7c-8223-757733259250"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.266823 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-config-data" (OuterVolumeSpecName: "config-data") pod "fccc9d3a-880a-4f7c-8223-757733259250" (UID: "fccc9d3a-880a-4f7c-8223-757733259250"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.360068 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd764c7d-ba7d-4a99-8988-863d9cd6ad03-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cd764c7d-ba7d-4a99-8988-863d9cd6ad03\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.360201 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd764c7d-ba7d-4a99-8988-863d9cd6ad03-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cd764c7d-ba7d-4a99-8988-863d9cd6ad03\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.360253 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tj8m\" (UniqueName: 
\"kubernetes.io/projected/cd764c7d-ba7d-4a99-8988-863d9cd6ad03-kube-api-access-2tj8m\") pod \"nova-cell1-conductor-0\" (UID: \"cd764c7d-ba7d-4a99-8988-863d9cd6ad03\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.360384 4930 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.360396 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.360406 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fccc9d3a-880a-4f7c-8223-757733259250-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.373295 4930 scope.go:117] "RemoveContainer" containerID="89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109" Nov 24 12:18:52 crc kubenswrapper[4930]: E1124 12:18:52.374074 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109\": container with ID starting with 89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109 not found: ID does not exist" containerID="89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.374133 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109"} err="failed to get container status 
\"89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109\": rpc error: code = NotFound desc = could not find container \"89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109\": container with ID starting with 89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109 not found: ID does not exist" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.374162 4930 scope.go:117] "RemoveContainer" containerID="a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9" Nov 24 12:18:52 crc kubenswrapper[4930]: E1124 12:18:52.374500 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9\": container with ID starting with a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9 not found: ID does not exist" containerID="a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.374524 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9"} err="failed to get container status \"a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9\": rpc error: code = NotFound desc = could not find container \"a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9\": container with ID starting with a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9 not found: ID does not exist" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.374551 4930 scope.go:117] "RemoveContainer" containerID="89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.374876 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109"} err="failed to get 
container status \"89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109\": rpc error: code = NotFound desc = could not find container \"89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109\": container with ID starting with 89a81943a92b1a1c0e081ae7eefd766cde9d2fb278fc91a968e2749190282109 not found: ID does not exist" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.374909 4930 scope.go:117] "RemoveContainer" containerID="a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.377205 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9"} err="failed to get container status \"a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9\": rpc error: code = NotFound desc = could not find container \"a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9\": container with ID starting with a9c689653292449654fe2a5cda892d93e1ba6d681e37e9a6be447706f1ad8eb9 not found: ID does not exist" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.396595 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.402768 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.414663 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.416243 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.418952 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.419194 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.429140 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.461692 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd764c7d-ba7d-4a99-8988-863d9cd6ad03-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cd764c7d-ba7d-4a99-8988-863d9cd6ad03\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.462034 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tj8m\" (UniqueName: \"kubernetes.io/projected/cd764c7d-ba7d-4a99-8988-863d9cd6ad03-kube-api-access-2tj8m\") pod \"nova-cell1-conductor-0\" (UID: \"cd764c7d-ba7d-4a99-8988-863d9cd6ad03\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.462197 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd764c7d-ba7d-4a99-8988-863d9cd6ad03-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cd764c7d-ba7d-4a99-8988-863d9cd6ad03\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.465669 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd764c7d-ba7d-4a99-8988-863d9cd6ad03-config-data\") pod \"nova-cell1-conductor-0\" (UID: 
\"cd764c7d-ba7d-4a99-8988-863d9cd6ad03\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.465899 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd764c7d-ba7d-4a99-8988-863d9cd6ad03-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cd764c7d-ba7d-4a99-8988-863d9cd6ad03\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.488458 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tj8m\" (UniqueName: \"kubernetes.io/projected/cd764c7d-ba7d-4a99-8988-863d9cd6ad03-kube-api-access-2tj8m\") pod \"nova-cell1-conductor-0\" (UID: \"cd764c7d-ba7d-4a99-8988-863d9cd6ad03\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.563651 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.563746 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221a9965-f13c-43b6-bf2e-a8fd14acffc9-logs\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.563783 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-config-data\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc 
kubenswrapper[4930]: I1124 12:18:52.564073 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.564244 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nt9s\" (UniqueName: \"kubernetes.io/projected/221a9965-f13c-43b6-bf2e-a8fd14acffc9-kube-api-access-5nt9s\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.666022 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.666368 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nt9s\" (UniqueName: \"kubernetes.io/projected/221a9965-f13c-43b6-bf2e-a8fd14acffc9-kube-api-access-5nt9s\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.666493 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.666635 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221a9965-f13c-43b6-bf2e-a8fd14acffc9-logs\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.666742 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-config-data\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.667332 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221a9965-f13c-43b6-bf2e-a8fd14acffc9-logs\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.670980 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.671360 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-config-data\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.671408 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " 
pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.677586 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.683216 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nt9s\" (UniqueName: \"kubernetes.io/projected/221a9965-f13c-43b6-bf2e-a8fd14acffc9-kube-api-access-5nt9s\") pod \"nova-metadata-0\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " pod="openstack/nova-metadata-0" Nov 24 12:18:52 crc kubenswrapper[4930]: I1124 12:18:52.735327 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:18:53 crc kubenswrapper[4930]: I1124 12:18:53.235959 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 12:18:53 crc kubenswrapper[4930]: I1124 12:18:53.324977 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:18:54 crc kubenswrapper[4930]: I1124 12:18:54.117506 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fccc9d3a-880a-4f7c-8223-757733259250" path="/var/lib/kubelet/pods/fccc9d3a-880a-4f7c-8223-757733259250/volumes" Nov 24 12:18:54 crc kubenswrapper[4930]: I1124 12:18:54.127568 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"221a9965-f13c-43b6-bf2e-a8fd14acffc9","Type":"ContainerStarted","Data":"ee67d12628de1c0b92a5c6ef7f46f6c9328b509b85283b1e878a92de43351fd4"} Nov 24 12:18:54 crc kubenswrapper[4930]: I1124 12:18:54.127674 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"221a9965-f13c-43b6-bf2e-a8fd14acffc9","Type":"ContainerStarted","Data":"663f21bd1d5bed3172cdc49f0899ea1e0cfaeb09d5775de325d6acd33fff3522"} Nov 24 12:18:54 crc kubenswrapper[4930]: I1124 12:18:54.127689 4930 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"221a9965-f13c-43b6-bf2e-a8fd14acffc9","Type":"ContainerStarted","Data":"b3e944ab18e8e07fcd28aeeae2762ccd2725c4bbb73dce6590701f36faa83b69"} Nov 24 12:18:54 crc kubenswrapper[4930]: I1124 12:18:54.131813 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"cd764c7d-ba7d-4a99-8988-863d9cd6ad03","Type":"ContainerStarted","Data":"927261951b3f53673e1aede718e864c11f94f72c2ba0a2e16e055c989ee708ce"} Nov 24 12:18:54 crc kubenswrapper[4930]: I1124 12:18:54.131856 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"cd764c7d-ba7d-4a99-8988-863d9cd6ad03","Type":"ContainerStarted","Data":"fcdedc2c0b484790b1e44f24afd7d1d36106f2c30a2f33297360740506403d9f"} Nov 24 12:18:54 crc kubenswrapper[4930]: I1124 12:18:54.131968 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 24 12:18:54 crc kubenswrapper[4930]: I1124 12:18:54.200669 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.200654478 podStartE2EDuration="2.200654478s" podCreationTimestamp="2025-11-24 12:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:18:54.179860389 +0000 UTC m=+1180.794188339" watchObservedRunningTime="2025-11-24 12:18:54.200654478 +0000 UTC m=+1180.814982428" Nov 24 12:18:54 crc kubenswrapper[4930]: E1124 12:18:54.352849 4930 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="20ae1a448d7c19a83afeff29b7e02d7870755d4d01a28e06a268b90df559d3ea" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:18:54 crc kubenswrapper[4930]: 
E1124 12:18:54.354154 4930 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="20ae1a448d7c19a83afeff29b7e02d7870755d4d01a28e06a268b90df559d3ea" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:18:54 crc kubenswrapper[4930]: E1124 12:18:54.355294 4930 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="20ae1a448d7c19a83afeff29b7e02d7870755d4d01a28e06a268b90df559d3ea" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:18:54 crc kubenswrapper[4930]: E1124 12:18:54.355340 4930 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="01aecd0b-d42b-494d-a6a6-d5294c283a8a" containerName="nova-scheduler-scheduler" Nov 24 12:18:56 crc kubenswrapper[4930]: I1124 12:18:56.151093 4930 generic.go:334] "Generic (PLEG): container finished" podID="01aecd0b-d42b-494d-a6a6-d5294c283a8a" containerID="20ae1a448d7c19a83afeff29b7e02d7870755d4d01a28e06a268b90df559d3ea" exitCode=0 Nov 24 12:18:56 crc kubenswrapper[4930]: I1124 12:18:56.151257 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01aecd0b-d42b-494d-a6a6-d5294c283a8a","Type":"ContainerDied","Data":"20ae1a448d7c19a83afeff29b7e02d7870755d4d01a28e06a268b90df559d3ea"} Nov 24 12:18:56 crc kubenswrapper[4930]: I1124 12:18:56.486888 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:18:56 crc kubenswrapper[4930]: I1124 12:18:56.507328 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=4.507311564 podStartE2EDuration="4.507311564s" podCreationTimestamp="2025-11-24 12:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:18:54.199856165 +0000 UTC m=+1180.814184135" watchObservedRunningTime="2025-11-24 12:18:56.507311564 +0000 UTC m=+1183.121639514" Nov 24 12:18:56 crc kubenswrapper[4930]: I1124 12:18:56.551254 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01aecd0b-d42b-494d-a6a6-d5294c283a8a-combined-ca-bundle\") pod \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\" (UID: \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\") " Nov 24 12:18:56 crc kubenswrapper[4930]: I1124 12:18:56.551314 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01aecd0b-d42b-494d-a6a6-d5294c283a8a-config-data\") pod \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\" (UID: \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\") " Nov 24 12:18:56 crc kubenswrapper[4930]: I1124 12:18:56.551448 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjkft\" (UniqueName: \"kubernetes.io/projected/01aecd0b-d42b-494d-a6a6-d5294c283a8a-kube-api-access-hjkft\") pod \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\" (UID: \"01aecd0b-d42b-494d-a6a6-d5294c283a8a\") " Nov 24 12:18:56 crc kubenswrapper[4930]: I1124 12:18:56.559897 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01aecd0b-d42b-494d-a6a6-d5294c283a8a-kube-api-access-hjkft" (OuterVolumeSpecName: "kube-api-access-hjkft") pod 
"01aecd0b-d42b-494d-a6a6-d5294c283a8a" (UID: "01aecd0b-d42b-494d-a6a6-d5294c283a8a"). InnerVolumeSpecName "kube-api-access-hjkft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:56 crc kubenswrapper[4930]: I1124 12:18:56.586726 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aecd0b-d42b-494d-a6a6-d5294c283a8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01aecd0b-d42b-494d-a6a6-d5294c283a8a" (UID: "01aecd0b-d42b-494d-a6a6-d5294c283a8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:56 crc kubenswrapper[4930]: I1124 12:18:56.587753 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aecd0b-d42b-494d-a6a6-d5294c283a8a-config-data" (OuterVolumeSpecName: "config-data") pod "01aecd0b-d42b-494d-a6a6-d5294c283a8a" (UID: "01aecd0b-d42b-494d-a6a6-d5294c283a8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:56 crc kubenswrapper[4930]: I1124 12:18:56.653743 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01aecd0b-d42b-494d-a6a6-d5294c283a8a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:56 crc kubenswrapper[4930]: I1124 12:18:56.653780 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjkft\" (UniqueName: \"kubernetes.io/projected/01aecd0b-d42b-494d-a6a6-d5294c283a8a-kube-api-access-hjkft\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:56 crc kubenswrapper[4930]: I1124 12:18:56.653795 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01aecd0b-d42b-494d-a6a6-d5294c283a8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.020600 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.162143 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-logs\") pod \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.162597 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-combined-ca-bundle\") pod \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.162675 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-config-data\") pod \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.162806 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r97cc\" (UniqueName: \"kubernetes.io/projected/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-kube-api-access-r97cc\") pod \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\" (UID: \"e0b920e0-82f3-40da-b8ca-f873a99b2ec2\") " Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.162922 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-logs" (OuterVolumeSpecName: "logs") pod "e0b920e0-82f3-40da-b8ca-f873a99b2ec2" (UID: "e0b920e0-82f3-40da-b8ca-f873a99b2ec2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.163251 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01aecd0b-d42b-494d-a6a6-d5294c283a8a","Type":"ContainerDied","Data":"1046f52c6f3e2b6b27cd8671c1610a6df628495fd059bf26bb4d07ee857aef7c"} Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.163268 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.163297 4930 scope.go:117] "RemoveContainer" containerID="20ae1a448d7c19a83afeff29b7e02d7870755d4d01a28e06a268b90df559d3ea" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.163317 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.166804 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-kube-api-access-r97cc" (OuterVolumeSpecName: "kube-api-access-r97cc") pod "e0b920e0-82f3-40da-b8ca-f873a99b2ec2" (UID: "e0b920e0-82f3-40da-b8ca-f873a99b2ec2"). InnerVolumeSpecName "kube-api-access-r97cc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.182288 4930 generic.go:334] "Generic (PLEG): container finished" podID="e0b920e0-82f3-40da-b8ca-f873a99b2ec2" containerID="6c614cd52ae50d8b0b70d3ffad9c18f9fabaf357fc75250b15765f08077b7819" exitCode=0 Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.182324 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0b920e0-82f3-40da-b8ca-f873a99b2ec2","Type":"ContainerDied","Data":"6c614cd52ae50d8b0b70d3ffad9c18f9fabaf357fc75250b15765f08077b7819"} Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.182349 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0b920e0-82f3-40da-b8ca-f873a99b2ec2","Type":"ContainerDied","Data":"65cb38935a1ce8a136eb2c7b8c0f2103ae421ea627fada0c5bb8252f94280313"} Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.182374 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.190717 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-config-data" (OuterVolumeSpecName: "config-data") pod "e0b920e0-82f3-40da-b8ca-f873a99b2ec2" (UID: "e0b920e0-82f3-40da-b8ca-f873a99b2ec2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.191977 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0b920e0-82f3-40da-b8ca-f873a99b2ec2" (UID: "e0b920e0-82f3-40da-b8ca-f873a99b2ec2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.265471 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.265526 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.265563 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r97cc\" (UniqueName: \"kubernetes.io/projected/e0b920e0-82f3-40da-b8ca-f873a99b2ec2-kube-api-access-r97cc\") on node \"crc\" DevicePath \"\"" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.282565 4930 scope.go:117] "RemoveContainer" containerID="6c614cd52ae50d8b0b70d3ffad9c18f9fabaf357fc75250b15765f08077b7819" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.285738 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.293812 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.308368 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:18:57 crc kubenswrapper[4930]: E1124 12:18:57.311987 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b920e0-82f3-40da-b8ca-f873a99b2ec2" containerName="nova-api-log" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.312038 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b920e0-82f3-40da-b8ca-f873a99b2ec2" containerName="nova-api-log" Nov 24 12:18:57 crc kubenswrapper[4930]: E1124 12:18:57.312069 4930 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e0b920e0-82f3-40da-b8ca-f873a99b2ec2" containerName="nova-api-api" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.312076 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b920e0-82f3-40da-b8ca-f873a99b2ec2" containerName="nova-api-api" Nov 24 12:18:57 crc kubenswrapper[4930]: E1124 12:18:57.312120 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aecd0b-d42b-494d-a6a6-d5294c283a8a" containerName="nova-scheduler-scheduler" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.312130 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aecd0b-d42b-494d-a6a6-d5294c283a8a" containerName="nova-scheduler-scheduler" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.312404 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b920e0-82f3-40da-b8ca-f873a99b2ec2" containerName="nova-api-log" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.312443 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aecd0b-d42b-494d-a6a6-d5294c283a8a" containerName="nova-scheduler-scheduler" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.312454 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b920e0-82f3-40da-b8ca-f873a99b2ec2" containerName="nova-api-api" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.313249 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.315358 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.315777 4930 scope.go:117] "RemoveContainer" containerID="f7eeb4887fac0731be840e6cdfbbe6cdafac2613ab70427fc9b7989665a5c94c" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.337677 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.357488 4930 scope.go:117] "RemoveContainer" containerID="6c614cd52ae50d8b0b70d3ffad9c18f9fabaf357fc75250b15765f08077b7819" Nov 24 12:18:57 crc kubenswrapper[4930]: E1124 12:18:57.360427 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c614cd52ae50d8b0b70d3ffad9c18f9fabaf357fc75250b15765f08077b7819\": container with ID starting with 6c614cd52ae50d8b0b70d3ffad9c18f9fabaf357fc75250b15765f08077b7819 not found: ID does not exist" containerID="6c614cd52ae50d8b0b70d3ffad9c18f9fabaf357fc75250b15765f08077b7819" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.360476 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c614cd52ae50d8b0b70d3ffad9c18f9fabaf357fc75250b15765f08077b7819"} err="failed to get container status \"6c614cd52ae50d8b0b70d3ffad9c18f9fabaf357fc75250b15765f08077b7819\": rpc error: code = NotFound desc = could not find container \"6c614cd52ae50d8b0b70d3ffad9c18f9fabaf357fc75250b15765f08077b7819\": container with ID starting with 6c614cd52ae50d8b0b70d3ffad9c18f9fabaf357fc75250b15765f08077b7819 not found: ID does not exist" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.360510 4930 scope.go:117] "RemoveContainer" containerID="f7eeb4887fac0731be840e6cdfbbe6cdafac2613ab70427fc9b7989665a5c94c" Nov 24 12:18:57 crc 
kubenswrapper[4930]: E1124 12:18:57.360958 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7eeb4887fac0731be840e6cdfbbe6cdafac2613ab70427fc9b7989665a5c94c\": container with ID starting with f7eeb4887fac0731be840e6cdfbbe6cdafac2613ab70427fc9b7989665a5c94c not found: ID does not exist" containerID="f7eeb4887fac0731be840e6cdfbbe6cdafac2613ab70427fc9b7989665a5c94c" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.361069 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7eeb4887fac0731be840e6cdfbbe6cdafac2613ab70427fc9b7989665a5c94c"} err="failed to get container status \"f7eeb4887fac0731be840e6cdfbbe6cdafac2613ab70427fc9b7989665a5c94c\": rpc error: code = NotFound desc = could not find container \"f7eeb4887fac0731be840e6cdfbbe6cdafac2613ab70427fc9b7989665a5c94c\": container with ID starting with f7eeb4887fac0731be840e6cdfbbe6cdafac2613ab70427fc9b7989665a5c94c not found: ID does not exist" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.368693 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42832f49-0332-47be-b2d3-072f00d69bb6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"42832f49-0332-47be-b2d3-072f00d69bb6\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.368968 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckscv\" (UniqueName: \"kubernetes.io/projected/42832f49-0332-47be-b2d3-072f00d69bb6-kube-api-access-ckscv\") pod \"nova-scheduler-0\" (UID: \"42832f49-0332-47be-b2d3-072f00d69bb6\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.369130 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/42832f49-0332-47be-b2d3-072f00d69bb6-config-data\") pod \"nova-scheduler-0\" (UID: \"42832f49-0332-47be-b2d3-072f00d69bb6\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.470625 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42832f49-0332-47be-b2d3-072f00d69bb6-config-data\") pod \"nova-scheduler-0\" (UID: \"42832f49-0332-47be-b2d3-072f00d69bb6\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.470742 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42832f49-0332-47be-b2d3-072f00d69bb6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"42832f49-0332-47be-b2d3-072f00d69bb6\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.470791 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckscv\" (UniqueName: \"kubernetes.io/projected/42832f49-0332-47be-b2d3-072f00d69bb6-kube-api-access-ckscv\") pod \"nova-scheduler-0\" (UID: \"42832f49-0332-47be-b2d3-072f00d69bb6\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.544236 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42832f49-0332-47be-b2d3-072f00d69bb6-config-data\") pod \"nova-scheduler-0\" (UID: \"42832f49-0332-47be-b2d3-072f00d69bb6\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.575984 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42832f49-0332-47be-b2d3-072f00d69bb6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"42832f49-0332-47be-b2d3-072f00d69bb6\") " 
pod="openstack/nova-scheduler-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.577107 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckscv\" (UniqueName: \"kubernetes.io/projected/42832f49-0332-47be-b2d3-072f00d69bb6-kube-api-access-ckscv\") pod \"nova-scheduler-0\" (UID: \"42832f49-0332-47be-b2d3-072f00d69bb6\") " pod="openstack/nova-scheduler-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.643207 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.670639 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.678326 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.713062 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.714739 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.722280 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.730235 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.736832 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.747588 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.781498 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af7c985-eaf7-4e5e-9f63-64fdfa050264-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.781662 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af7c985-eaf7-4e5e-9f63-64fdfa050264-logs\") pod \"nova-api-0\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.781903 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af7c985-eaf7-4e5e-9f63-64fdfa050264-config-data\") pod \"nova-api-0\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.781991 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65jg9\" (UniqueName: 
\"kubernetes.io/projected/9af7c985-eaf7-4e5e-9f63-64fdfa050264-kube-api-access-65jg9\") pod \"nova-api-0\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.884120 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af7c985-eaf7-4e5e-9f63-64fdfa050264-logs\") pod \"nova-api-0\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.884276 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af7c985-eaf7-4e5e-9f63-64fdfa050264-config-data\") pod \"nova-api-0\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.884319 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65jg9\" (UniqueName: \"kubernetes.io/projected/9af7c985-eaf7-4e5e-9f63-64fdfa050264-kube-api-access-65jg9\") pod \"nova-api-0\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.884450 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af7c985-eaf7-4e5e-9f63-64fdfa050264-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.886438 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af7c985-eaf7-4e5e-9f63-64fdfa050264-logs\") pod \"nova-api-0\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.891100 4930 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af7c985-eaf7-4e5e-9f63-64fdfa050264-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.892191 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af7c985-eaf7-4e5e-9f63-64fdfa050264-config-data\") pod \"nova-api-0\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " pod="openstack/nova-api-0" Nov 24 12:18:57 crc kubenswrapper[4930]: I1124 12:18:57.903353 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65jg9\" (UniqueName: \"kubernetes.io/projected/9af7c985-eaf7-4e5e-9f63-64fdfa050264-kube-api-access-65jg9\") pod \"nova-api-0\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " pod="openstack/nova-api-0" Nov 24 12:18:58 crc kubenswrapper[4930]: I1124 12:18:58.047734 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:18:58 crc kubenswrapper[4930]: I1124 12:18:58.100158 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01aecd0b-d42b-494d-a6a6-d5294c283a8a" path="/var/lib/kubelet/pods/01aecd0b-d42b-494d-a6a6-d5294c283a8a/volumes" Nov 24 12:18:58 crc kubenswrapper[4930]: I1124 12:18:58.101817 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0b920e0-82f3-40da-b8ca-f873a99b2ec2" path="/var/lib/kubelet/pods/e0b920e0-82f3-40da-b8ca-f873a99b2ec2/volumes" Nov 24 12:18:58 crc kubenswrapper[4930]: I1124 12:18:58.159703 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:18:58 crc kubenswrapper[4930]: I1124 12:18:58.211242 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"42832f49-0332-47be-b2d3-072f00d69bb6","Type":"ContainerStarted","Data":"2249027c4df70c9e869928eff9c14433364a31b4e361499b31e2df40b068e272"} Nov 24 12:18:58 crc kubenswrapper[4930]: I1124 12:18:58.515199 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:18:58 crc kubenswrapper[4930]: W1124 12:18:58.517805 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af7c985_eaf7_4e5e_9f63_64fdfa050264.slice/crio-99f872a2f1cb2158522495bec8068169c4bdff9dc69c3162ac3a6da06d984539 WatchSource:0}: Error finding container 99f872a2f1cb2158522495bec8068169c4bdff9dc69c3162ac3a6da06d984539: Status 404 returned error can't find the container with id 99f872a2f1cb2158522495bec8068169c4bdff9dc69c3162ac3a6da06d984539 Nov 24 12:18:59 crc kubenswrapper[4930]: I1124 12:18:59.221628 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"42832f49-0332-47be-b2d3-072f00d69bb6","Type":"ContainerStarted","Data":"6cfdee810727f134c44e2d86043d6c2e742dd97ca1f1d45c2d784e4f6ac37d89"} 
Nov 24 12:18:59 crc kubenswrapper[4930]: I1124 12:18:59.223893 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af7c985-eaf7-4e5e-9f63-64fdfa050264","Type":"ContainerStarted","Data":"9ac3bf16388caffa0bf6bee4cc41b00e38dfc216d354f036651b70409cc673c1"} Nov 24 12:18:59 crc kubenswrapper[4930]: I1124 12:18:59.223931 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af7c985-eaf7-4e5e-9f63-64fdfa050264","Type":"ContainerStarted","Data":"e2e769d88cf03a01c4aa89857aae31163ed667bad7fe7f3aae9d78abeae79a8f"} Nov 24 12:18:59 crc kubenswrapper[4930]: I1124 12:18:59.223944 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af7c985-eaf7-4e5e-9f63-64fdfa050264","Type":"ContainerStarted","Data":"99f872a2f1cb2158522495bec8068169c4bdff9dc69c3162ac3a6da06d984539"} Nov 24 12:18:59 crc kubenswrapper[4930]: I1124 12:18:59.248061 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.248039505 podStartE2EDuration="2.248039505s" podCreationTimestamp="2025-11-24 12:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:18:59.240883518 +0000 UTC m=+1185.855211468" watchObservedRunningTime="2025-11-24 12:18:59.248039505 +0000 UTC m=+1185.862367455" Nov 24 12:18:59 crc kubenswrapper[4930]: I1124 12:18:59.264299 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.264283322 podStartE2EDuration="2.264283322s" podCreationTimestamp="2025-11-24 12:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:18:59.263988074 +0000 UTC m=+1185.878316024" watchObservedRunningTime="2025-11-24 12:18:59.264283322 +0000 UTC 
m=+1185.878611272" Nov 24 12:19:02 crc kubenswrapper[4930]: I1124 12:19:02.644107 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 12:19:02 crc kubenswrapper[4930]: I1124 12:19:02.707041 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 24 12:19:02 crc kubenswrapper[4930]: I1124 12:19:02.736303 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 12:19:02 crc kubenswrapper[4930]: I1124 12:19:02.736373 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 12:19:03 crc kubenswrapper[4930]: I1124 12:19:03.750857 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:19:03 crc kubenswrapper[4930]: I1124 12:19:03.750887 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:19:07 crc kubenswrapper[4930]: I1124 12:19:07.644406 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 12:19:07 crc kubenswrapper[4930]: I1124 12:19:07.675873 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 12:19:08 crc kubenswrapper[4930]: I1124 12:19:08.048849 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:19:08 crc 
kubenswrapper[4930]: I1124 12:19:08.049596 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:19:08 crc kubenswrapper[4930]: I1124 12:19:08.337593 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 12:19:09 crc kubenswrapper[4930]: I1124 12:19:09.131754 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9af7c985-eaf7-4e5e-9f63-64fdfa050264" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:19:09 crc kubenswrapper[4930]: I1124 12:19:09.131760 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9af7c985-eaf7-4e5e-9f63-64fdfa050264" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:19:10 crc kubenswrapper[4930]: I1124 12:19:10.193059 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 12:19:12 crc kubenswrapper[4930]: I1124 12:19:12.746210 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 12:19:12 crc kubenswrapper[4930]: I1124 12:19:12.748387 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 12:19:12 crc kubenswrapper[4930]: I1124 12:19:12.780056 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 12:19:13 crc kubenswrapper[4930]: I1124 12:19:13.365884 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 12:19:13 crc kubenswrapper[4930]: I1124 12:19:13.800862 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/kube-state-metrics-0"] Nov 24 12:19:13 crc kubenswrapper[4930]: I1124 12:19:13.801121 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="7cce9366-d1b8-46ab-8ceb-05f6b71348f1" containerName="kube-state-metrics" containerID="cri-o://3a8942f1411aeaac44101a24d6a8c8e7cbf34f00a1e96cee5d028bcae3942f6b" gracePeriod=30 Nov 24 12:19:13 crc kubenswrapper[4930]: I1124 12:19:13.822397 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="7cce9366-d1b8-46ab-8ceb-05f6b71348f1" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": dial tcp 10.217.0.106:8081: connect: connection refused" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.316062 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.368110 4930 generic.go:334] "Generic (PLEG): container finished" podID="7cce9366-d1b8-46ab-8ceb-05f6b71348f1" containerID="3a8942f1411aeaac44101a24d6a8c8e7cbf34f00a1e96cee5d028bcae3942f6b" exitCode=2 Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.368205 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.368226 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7cce9366-d1b8-46ab-8ceb-05f6b71348f1","Type":"ContainerDied","Data":"3a8942f1411aeaac44101a24d6a8c8e7cbf34f00a1e96cee5d028bcae3942f6b"} Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.368277 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7cce9366-d1b8-46ab-8ceb-05f6b71348f1","Type":"ContainerDied","Data":"7024730a843ea0c90a6f660ccb478c441ddc81867c9611de834d624326224c2f"} Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.368299 4930 scope.go:117] "RemoveContainer" containerID="3a8942f1411aeaac44101a24d6a8c8e7cbf34f00a1e96cee5d028bcae3942f6b" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.394430 4930 scope.go:117] "RemoveContainer" containerID="3a8942f1411aeaac44101a24d6a8c8e7cbf34f00a1e96cee5d028bcae3942f6b" Nov 24 12:19:14 crc kubenswrapper[4930]: E1124 12:19:14.394930 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a8942f1411aeaac44101a24d6a8c8e7cbf34f00a1e96cee5d028bcae3942f6b\": container with ID starting with 3a8942f1411aeaac44101a24d6a8c8e7cbf34f00a1e96cee5d028bcae3942f6b not found: ID does not exist" containerID="3a8942f1411aeaac44101a24d6a8c8e7cbf34f00a1e96cee5d028bcae3942f6b" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.394980 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a8942f1411aeaac44101a24d6a8c8e7cbf34f00a1e96cee5d028bcae3942f6b"} err="failed to get container status \"3a8942f1411aeaac44101a24d6a8c8e7cbf34f00a1e96cee5d028bcae3942f6b\": rpc error: code = NotFound desc = could not find container \"3a8942f1411aeaac44101a24d6a8c8e7cbf34f00a1e96cee5d028bcae3942f6b\": container with ID starting with 
3a8942f1411aeaac44101a24d6a8c8e7cbf34f00a1e96cee5d028bcae3942f6b not found: ID does not exist" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.415616 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmvfb\" (UniqueName: \"kubernetes.io/projected/7cce9366-d1b8-46ab-8ceb-05f6b71348f1-kube-api-access-jmvfb\") pod \"7cce9366-d1b8-46ab-8ceb-05f6b71348f1\" (UID: \"7cce9366-d1b8-46ab-8ceb-05f6b71348f1\") " Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.421983 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cce9366-d1b8-46ab-8ceb-05f6b71348f1-kube-api-access-jmvfb" (OuterVolumeSpecName: "kube-api-access-jmvfb") pod "7cce9366-d1b8-46ab-8ceb-05f6b71348f1" (UID: "7cce9366-d1b8-46ab-8ceb-05f6b71348f1"). InnerVolumeSpecName "kube-api-access-jmvfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.517347 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmvfb\" (UniqueName: \"kubernetes.io/projected/7cce9366-d1b8-46ab-8ceb-05f6b71348f1-kube-api-access-jmvfb\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.707218 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.716439 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.734413 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:19:14 crc kubenswrapper[4930]: E1124 12:19:14.734906 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cce9366-d1b8-46ab-8ceb-05f6b71348f1" containerName="kube-state-metrics" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.734930 4930 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7cce9366-d1b8-46ab-8ceb-05f6b71348f1" containerName="kube-state-metrics" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.735362 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cce9366-d1b8-46ab-8ceb-05f6b71348f1" containerName="kube-state-metrics" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.736197 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.738499 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.739344 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.770728 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.825721 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43eb3b2e-759d-46b8-885a-222b5d97e1c6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"43eb3b2e-759d-46b8-885a-222b5d97e1c6\") " pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.825808 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v44j\" (UniqueName: \"kubernetes.io/projected/43eb3b2e-759d-46b8-885a-222b5d97e1c6-kube-api-access-9v44j\") pod \"kube-state-metrics-0\" (UID: \"43eb3b2e-759d-46b8-885a-222b5d97e1c6\") " pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.825979 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/43eb3b2e-759d-46b8-885a-222b5d97e1c6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"43eb3b2e-759d-46b8-885a-222b5d97e1c6\") " pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.826008 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/43eb3b2e-759d-46b8-885a-222b5d97e1c6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"43eb3b2e-759d-46b8-885a-222b5d97e1c6\") " pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.928466 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/43eb3b2e-759d-46b8-885a-222b5d97e1c6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"43eb3b2e-759d-46b8-885a-222b5d97e1c6\") " pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.928849 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43eb3b2e-759d-46b8-885a-222b5d97e1c6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"43eb3b2e-759d-46b8-885a-222b5d97e1c6\") " pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.928888 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v44j\" (UniqueName: \"kubernetes.io/projected/43eb3b2e-759d-46b8-885a-222b5d97e1c6-kube-api-access-9v44j\") pod \"kube-state-metrics-0\" (UID: \"43eb3b2e-759d-46b8-885a-222b5d97e1c6\") " pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.929008 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/43eb3b2e-759d-46b8-885a-222b5d97e1c6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"43eb3b2e-759d-46b8-885a-222b5d97e1c6\") " pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.933841 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43eb3b2e-759d-46b8-885a-222b5d97e1c6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"43eb3b2e-759d-46b8-885a-222b5d97e1c6\") " pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.933866 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/43eb3b2e-759d-46b8-885a-222b5d97e1c6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"43eb3b2e-759d-46b8-885a-222b5d97e1c6\") " pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.947374 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/43eb3b2e-759d-46b8-885a-222b5d97e1c6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"43eb3b2e-759d-46b8-885a-222b5d97e1c6\") " pod="openstack/kube-state-metrics-0" Nov 24 12:19:14 crc kubenswrapper[4930]: I1124 12:19:14.950394 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v44j\" (UniqueName: \"kubernetes.io/projected/43eb3b2e-759d-46b8-885a-222b5d97e1c6-kube-api-access-9v44j\") pod \"kube-state-metrics-0\" (UID: \"43eb3b2e-759d-46b8-885a-222b5d97e1c6\") " pod="openstack/kube-state-metrics-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.062897 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.249421 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.337758 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27d93d3-dcd1-44c5-9be2-be70096911e7-config-data\") pod \"a27d93d3-dcd1-44c5-9be2-be70096911e7\" (UID: \"a27d93d3-dcd1-44c5-9be2-be70096911e7\") " Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.338171 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27d93d3-dcd1-44c5-9be2-be70096911e7-combined-ca-bundle\") pod \"a27d93d3-dcd1-44c5-9be2-be70096911e7\" (UID: \"a27d93d3-dcd1-44c5-9be2-be70096911e7\") " Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.338237 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lwsv\" (UniqueName: \"kubernetes.io/projected/a27d93d3-dcd1-44c5-9be2-be70096911e7-kube-api-access-7lwsv\") pod \"a27d93d3-dcd1-44c5-9be2-be70096911e7\" (UID: \"a27d93d3-dcd1-44c5-9be2-be70096911e7\") " Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.341596 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a27d93d3-dcd1-44c5-9be2-be70096911e7-kube-api-access-7lwsv" (OuterVolumeSpecName: "kube-api-access-7lwsv") pod "a27d93d3-dcd1-44c5-9be2-be70096911e7" (UID: "a27d93d3-dcd1-44c5-9be2-be70096911e7"). InnerVolumeSpecName "kube-api-access-7lwsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.364299 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a27d93d3-dcd1-44c5-9be2-be70096911e7-config-data" (OuterVolumeSpecName: "config-data") pod "a27d93d3-dcd1-44c5-9be2-be70096911e7" (UID: "a27d93d3-dcd1-44c5-9be2-be70096911e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.364955 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a27d93d3-dcd1-44c5-9be2-be70096911e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a27d93d3-dcd1-44c5-9be2-be70096911e7" (UID: "a27d93d3-dcd1-44c5-9be2-be70096911e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.386934 4930 generic.go:334] "Generic (PLEG): container finished" podID="a27d93d3-dcd1-44c5-9be2-be70096911e7" containerID="1addd5c9aac389f78ff626f82c755c042dae08f223a8f242fc015bccbf06a739" exitCode=137 Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.387001 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a27d93d3-dcd1-44c5-9be2-be70096911e7","Type":"ContainerDied","Data":"1addd5c9aac389f78ff626f82c755c042dae08f223a8f242fc015bccbf06a739"} Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.387028 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a27d93d3-dcd1-44c5-9be2-be70096911e7","Type":"ContainerDied","Data":"21e2cedafcba0cb7511f2a1b357459cd93c36abacf9db707e4b027fe365ad550"} Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.387044 4930 scope.go:117] "RemoveContainer" containerID="1addd5c9aac389f78ff626f82c755c042dae08f223a8f242fc015bccbf06a739" Nov 24 12:19:15 
crc kubenswrapper[4930]: I1124 12:19:15.387143 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.429686 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.442127 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27d93d3-dcd1-44c5-9be2-be70096911e7-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.442170 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27d93d3-dcd1-44c5-9be2-be70096911e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.442183 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lwsv\" (UniqueName: \"kubernetes.io/projected/a27d93d3-dcd1-44c5-9be2-be70096911e7-kube-api-access-7lwsv\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.442212 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.444971 4930 scope.go:117] "RemoveContainer" containerID="1addd5c9aac389f78ff626f82c755c042dae08f223a8f242fc015bccbf06a739" Nov 24 12:19:15 crc kubenswrapper[4930]: E1124 12:19:15.445591 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1addd5c9aac389f78ff626f82c755c042dae08f223a8f242fc015bccbf06a739\": container with ID starting with 1addd5c9aac389f78ff626f82c755c042dae08f223a8f242fc015bccbf06a739 not found: ID does not exist" containerID="1addd5c9aac389f78ff626f82c755c042dae08f223a8f242fc015bccbf06a739" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 
12:19:15.445640 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1addd5c9aac389f78ff626f82c755c042dae08f223a8f242fc015bccbf06a739"} err="failed to get container status \"1addd5c9aac389f78ff626f82c755c042dae08f223a8f242fc015bccbf06a739\": rpc error: code = NotFound desc = could not find container \"1addd5c9aac389f78ff626f82c755c042dae08f223a8f242fc015bccbf06a739\": container with ID starting with 1addd5c9aac389f78ff626f82c755c042dae08f223a8f242fc015bccbf06a739 not found: ID does not exist" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.452891 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:19:15 crc kubenswrapper[4930]: E1124 12:19:15.453334 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27d93d3-dcd1-44c5-9be2-be70096911e7" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.453353 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27d93d3-dcd1-44c5-9be2-be70096911e7" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.453560 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="a27d93d3-dcd1-44c5-9be2-be70096911e7" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.454205 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.456640 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.456982 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.460067 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.486035 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.524804 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:19:15 crc kubenswrapper[4930]: W1124 12:19:15.528193 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43eb3b2e_759d_46b8_885a_222b5d97e1c6.slice/crio-aebfc953fe29e94dc93a681431d129955dfc903f3a2da00f05a2a4bcad021020 WatchSource:0}: Error finding container aebfc953fe29e94dc93a681431d129955dfc903f3a2da00f05a2a4bcad021020: Status 404 returned error can't find the container with id aebfc953fe29e94dc93a681431d129955dfc903f3a2da00f05a2a4bcad021020 Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.543658 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnv56\" (UniqueName: \"kubernetes.io/projected/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-kube-api-access-fnv56\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.543898 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.543999 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.544138 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.544203 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.645747 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnv56\" (UniqueName: \"kubernetes.io/projected/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-kube-api-access-fnv56\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.645833 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.645864 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.645896 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.645941 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.649816 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.650084 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.650082 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.654008 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.663576 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnv56\" (UniqueName: \"kubernetes.io/projected/8d796659-c1c3-48aa-94eb-e16a14f8a0c8-kube-api-access-fnv56\") pod \"nova-cell1-novncproxy-0\" (UID: \"8d796659-c1c3-48aa-94eb-e16a14f8a0c8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.732843 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.733297 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="proxy-httpd" containerID="cri-o://0e6c00d3b5462dbbd72a61dcc43693dac3cecb13daf54e979fa645bf82172965" gracePeriod=30 Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.733374 4930 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="sg-core" containerID="cri-o://963d91f9301ebf59e8f63cd9dfc2be3fc865dd50e7bbac2c90eae0774f0643cf" gracePeriod=30 Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.733423 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="ceilometer-notification-agent" containerID="cri-o://26d4dcd74c4103be93957256f33474b8438f7b26dbe33580b6eb5ccb4b1eefd2" gracePeriod=30 Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.733484 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="ceilometer-central-agent" containerID="cri-o://5e1fe972b1ba6b54a2dc70773062582defe9a0330cd004312b20abe15b1f281a" gracePeriod=30 Nov 24 12:19:15 crc kubenswrapper[4930]: I1124 12:19:15.780431 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:16 crc kubenswrapper[4930]: I1124 12:19:16.105941 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cce9366-d1b8-46ab-8ceb-05f6b71348f1" path="/var/lib/kubelet/pods/7cce9366-d1b8-46ab-8ceb-05f6b71348f1/volumes" Nov 24 12:19:16 crc kubenswrapper[4930]: I1124 12:19:16.106991 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a27d93d3-dcd1-44c5-9be2-be70096911e7" path="/var/lib/kubelet/pods/a27d93d3-dcd1-44c5-9be2-be70096911e7/volumes" Nov 24 12:19:16 crc kubenswrapper[4930]: I1124 12:19:16.108387 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:19:16 crc kubenswrapper[4930]: I1124 12:19:16.400533 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"43eb3b2e-759d-46b8-885a-222b5d97e1c6","Type":"ContainerStarted","Data":"aebfc953fe29e94dc93a681431d129955dfc903f3a2da00f05a2a4bcad021020"} Nov 24 12:19:16 crc kubenswrapper[4930]: I1124 12:19:16.402425 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8d796659-c1c3-48aa-94eb-e16a14f8a0c8","Type":"ContainerStarted","Data":"6f68bd4f3f0148cf512bc4bd5ffbed4102c56c65c6b436974635617775b4ff02"} Nov 24 12:19:16 crc kubenswrapper[4930]: I1124 12:19:16.407324 4930 generic.go:334] "Generic (PLEG): container finished" podID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerID="0e6c00d3b5462dbbd72a61dcc43693dac3cecb13daf54e979fa645bf82172965" exitCode=0 Nov 24 12:19:16 crc kubenswrapper[4930]: I1124 12:19:16.407361 4930 generic.go:334] "Generic (PLEG): container finished" podID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerID="963d91f9301ebf59e8f63cd9dfc2be3fc865dd50e7bbac2c90eae0774f0643cf" exitCode=2 Nov 24 12:19:16 crc kubenswrapper[4930]: I1124 12:19:16.407373 4930 generic.go:334] "Generic (PLEG): container finished" 
podID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerID="5e1fe972b1ba6b54a2dc70773062582defe9a0330cd004312b20abe15b1f281a" exitCode=0 Nov 24 12:19:16 crc kubenswrapper[4930]: I1124 12:19:16.407460 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba","Type":"ContainerDied","Data":"0e6c00d3b5462dbbd72a61dcc43693dac3cecb13daf54e979fa645bf82172965"} Nov 24 12:19:16 crc kubenswrapper[4930]: I1124 12:19:16.407492 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba","Type":"ContainerDied","Data":"963d91f9301ebf59e8f63cd9dfc2be3fc865dd50e7bbac2c90eae0774f0643cf"} Nov 24 12:19:16 crc kubenswrapper[4930]: I1124 12:19:16.407506 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba","Type":"ContainerDied","Data":"5e1fe972b1ba6b54a2dc70773062582defe9a0330cd004312b20abe15b1f281a"} Nov 24 12:19:17 crc kubenswrapper[4930]: I1124 12:19:17.428279 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"43eb3b2e-759d-46b8-885a-222b5d97e1c6","Type":"ContainerStarted","Data":"8f3fe482e770363cf16e54781875190ae31fe52162ce336c88539f278d11c513"} Nov 24 12:19:17 crc kubenswrapper[4930]: I1124 12:19:17.428713 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 12:19:17 crc kubenswrapper[4930]: I1124 12:19:17.429640 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8d796659-c1c3-48aa-94eb-e16a14f8a0c8","Type":"ContainerStarted","Data":"39862c62f578b59627f02294d2c8a7916f7723582bdd4225793378882d629212"} Nov 24 12:19:17 crc kubenswrapper[4930]: I1124 12:19:17.470030 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" 
podStartSLOduration=2.465899557 podStartE2EDuration="3.470009272s" podCreationTimestamp="2025-11-24 12:19:14 +0000 UTC" firstStartedPulling="2025-11-24 12:19:15.53080263 +0000 UTC m=+1202.145130580" lastFinishedPulling="2025-11-24 12:19:16.534912355 +0000 UTC m=+1203.149240295" observedRunningTime="2025-11-24 12:19:17.447710989 +0000 UTC m=+1204.062038939" watchObservedRunningTime="2025-11-24 12:19:17.470009272 +0000 UTC m=+1204.084337222" Nov 24 12:19:17 crc kubenswrapper[4930]: I1124 12:19:17.476125 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.476102637 podStartE2EDuration="2.476102637s" podCreationTimestamp="2025-11-24 12:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:19:17.465979175 +0000 UTC m=+1204.080307125" watchObservedRunningTime="2025-11-24 12:19:17.476102637 +0000 UTC m=+1204.090430587" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.051828 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.052753 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.053712 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.065210 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.442234 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.451836 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 12:19:18 
crc kubenswrapper[4930]: I1124 12:19:18.606275 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-bvhzv"] Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.608185 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.629677 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-bvhzv"] Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.725614 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc26l\" (UniqueName: \"kubernetes.io/projected/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-kube-api-access-jc26l\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.725915 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.726058 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-config\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.726257 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-dns-svc\") pod 
\"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.726508 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.726608 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.828126 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.828401 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.828444 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc26l\" (UniqueName: 
\"kubernetes.io/projected/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-kube-api-access-jc26l\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.828482 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.828499 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-config\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.828530 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-dns-svc\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.829014 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.829267 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-dns-svc\") pod 
\"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.829428 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.829586 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.829662 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-config\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.850978 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc26l\" (UniqueName: \"kubernetes.io/projected/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-kube-api-access-jc26l\") pod \"dnsmasq-dns-5d7f54fb65-bvhzv\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") " pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:18 crc kubenswrapper[4930]: I1124 12:19:18.943849 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:19 crc kubenswrapper[4930]: I1124 12:19:19.427237 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-bvhzv"] Nov 24 12:19:19 crc kubenswrapper[4930]: I1124 12:19:19.452659 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" event={"ID":"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac","Type":"ContainerStarted","Data":"a4b840347e97b8f6e2dedab6cce05b641d0307066a226298ae333bb9368f3c20"} Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.464887 4930 generic.go:334] "Generic (PLEG): container finished" podID="176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" containerID="7df6d2b92db9207da316dbe87e6ae0f67d35d54f6b2e9b032f8097c5b9c896e7" exitCode=0 Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.465415 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" event={"ID":"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac","Type":"ContainerDied","Data":"7df6d2b92db9207da316dbe87e6ae0f67d35d54f6b2e9b032f8097c5b9c896e7"} Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.474205 4930 generic.go:334] "Generic (PLEG): container finished" podID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerID="26d4dcd74c4103be93957256f33474b8438f7b26dbe33580b6eb5ccb4b1eefd2" exitCode=0 Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.474247 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba","Type":"ContainerDied","Data":"26d4dcd74c4103be93957256f33474b8438f7b26dbe33580b6eb5ccb4b1eefd2"} Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.734947 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.781665 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.873050 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-sg-core-conf-yaml\") pod \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.873138 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-config-data\") pod \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.873180 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4qmv\" (UniqueName: \"kubernetes.io/projected/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-kube-api-access-j4qmv\") pod \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.873291 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-combined-ca-bundle\") pod \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.873325 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-log-httpd\") pod \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " 
Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.873420 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-run-httpd\") pod \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.873479 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-scripts\") pod \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\" (UID: \"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba\") " Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.875219 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" (UID: "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.875584 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" (UID: "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.882605 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-scripts" (OuterVolumeSpecName: "scripts") pod "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" (UID: "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.882942 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-kube-api-access-j4qmv" (OuterVolumeSpecName: "kube-api-access-j4qmv") pod "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" (UID: "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba"). InnerVolumeSpecName "kube-api-access-j4qmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.901719 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" (UID: "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.963213 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" (UID: "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.976048 4930 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.976082 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.976094 4930 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.976106 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4qmv\" (UniqueName: \"kubernetes.io/projected/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-kube-api-access-j4qmv\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.976117 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.976128 4930 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:20 crc kubenswrapper[4930]: I1124 12:19:20.991634 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-config-data" (OuterVolumeSpecName: "config-data") pod "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" (UID: "7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.078011 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.485673 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" event={"ID":"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac","Type":"ContainerStarted","Data":"18557a516cf706707ebb3663f6fe5ce1795bc84006acb0a8af4940cf2e9d71b8"} Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.486692 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.489021 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba","Type":"ContainerDied","Data":"eb2c752d84a13025a708e8c6176c4c4843f14bc2d4e0a4a7e5368f263138be1f"} Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.489062 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.489076 4930 scope.go:117] "RemoveContainer" containerID="0e6c00d3b5462dbbd72a61dcc43693dac3cecb13daf54e979fa645bf82172965" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.511335 4930 scope.go:117] "RemoveContainer" containerID="963d91f9301ebf59e8f63cd9dfc2be3fc865dd50e7bbac2c90eae0774f0643cf" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.515997 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" podStartSLOduration=3.515912968 podStartE2EDuration="3.515912968s" podCreationTimestamp="2025-11-24 12:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:19:21.507807295 +0000 UTC m=+1208.122135255" watchObservedRunningTime="2025-11-24 12:19:21.515912968 +0000 UTC m=+1208.130240908" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.529288 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.534702 4930 scope.go:117] "RemoveContainer" containerID="26d4dcd74c4103be93957256f33474b8438f7b26dbe33580b6eb5ccb4b1eefd2" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.538464 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.552270 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:19:21 crc kubenswrapper[4930]: E1124 12:19:21.552866 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="ceilometer-central-agent" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.552889 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" 
containerName="ceilometer-central-agent" Nov 24 12:19:21 crc kubenswrapper[4930]: E1124 12:19:21.552920 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="proxy-httpd" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.552931 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="proxy-httpd" Nov 24 12:19:21 crc kubenswrapper[4930]: E1124 12:19:21.552941 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="sg-core" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.552949 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="sg-core" Nov 24 12:19:21 crc kubenswrapper[4930]: E1124 12:19:21.553020 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="ceilometer-notification-agent" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.553028 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="ceilometer-notification-agent" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.553278 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="ceilometer-central-agent" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.553297 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="sg-core" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.553310 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="proxy-httpd" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.553330 4930 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" containerName="ceilometer-notification-agent" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.561222 4930 scope.go:117] "RemoveContainer" containerID="5e1fe972b1ba6b54a2dc70773062582defe9a0330cd004312b20abe15b1f281a" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.563606 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.566084 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.566156 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.566256 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.571490 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.691386 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-run-httpd\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.691462 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcxpj\" (UniqueName: \"kubernetes.io/projected/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-kube-api-access-hcxpj\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.691496 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-scripts\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.691563 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.691580 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.691598 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.691626 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-config-data\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.691656 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-log-httpd\") pod 
\"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.693911 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.694215 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9af7c985-eaf7-4e5e-9f63-64fdfa050264" containerName="nova-api-log" containerID="cri-o://e2e769d88cf03a01c4aa89857aae31163ed667bad7fe7f3aae9d78abeae79a8f" gracePeriod=30 Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.694292 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9af7c985-eaf7-4e5e-9f63-64fdfa050264" containerName="nova-api-api" containerID="cri-o://9ac3bf16388caffa0bf6bee4cc41b00e38dfc216d354f036651b70409cc673c1" gracePeriod=30 Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.793829 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-run-httpd\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.793925 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcxpj\" (UniqueName: \"kubernetes.io/projected/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-kube-api-access-hcxpj\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.793964 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-scripts\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 
crc kubenswrapper[4930]: I1124 12:19:21.793995 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.794010 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.794029 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.794047 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-config-data\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.794068 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-log-httpd\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.794321 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-run-httpd\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.795021 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-log-httpd\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.798618 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.798622 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.798844 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.799077 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-scripts\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.801002 4930 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-config-data\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.811177 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcxpj\" (UniqueName: \"kubernetes.io/projected/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-kube-api-access-hcxpj\") pod \"ceilometer-0\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " pod="openstack/ceilometer-0" Nov 24 12:19:21 crc kubenswrapper[4930]: I1124 12:19:21.927716 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:19:22 crc kubenswrapper[4930]: I1124 12:19:22.027594 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:19:22 crc kubenswrapper[4930]: I1124 12:19:22.102256 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba" path="/var/lib/kubelet/pods/7df1be6c-f0d2-4f1b-8e68-e0a05d9e0eba/volumes" Nov 24 12:19:22 crc kubenswrapper[4930]: W1124 12:19:22.438845 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1e40006_4e80_4b40_90b1_cb3ecbc1a616.slice/crio-81087a87589343733f8b20c2ab4e4ed8f995f3e3c81d58718640cbbce200c407 WatchSource:0}: Error finding container 81087a87589343733f8b20c2ab4e4ed8f995f3e3c81d58718640cbbce200c407: Status 404 returned error can't find the container with id 81087a87589343733f8b20c2ab4e4ed8f995f3e3c81d58718640cbbce200c407 Nov 24 12:19:22 crc kubenswrapper[4930]: I1124 12:19:22.445027 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:19:22 crc kubenswrapper[4930]: I1124 12:19:22.499982 4930 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"e1e40006-4e80-4b40-90b1-cb3ecbc1a616","Type":"ContainerStarted","Data":"81087a87589343733f8b20c2ab4e4ed8f995f3e3c81d58718640cbbce200c407"} Nov 24 12:19:22 crc kubenswrapper[4930]: I1124 12:19:22.501736 4930 generic.go:334] "Generic (PLEG): container finished" podID="9af7c985-eaf7-4e5e-9f63-64fdfa050264" containerID="e2e769d88cf03a01c4aa89857aae31163ed667bad7fe7f3aae9d78abeae79a8f" exitCode=143 Nov 24 12:19:22 crc kubenswrapper[4930]: I1124 12:19:22.502686 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af7c985-eaf7-4e5e-9f63-64fdfa050264","Type":"ContainerDied","Data":"e2e769d88cf03a01c4aa89857aae31163ed667bad7fe7f3aae9d78abeae79a8f"} Nov 24 12:19:24 crc kubenswrapper[4930]: I1124 12:19:24.520455 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1e40006-4e80-4b40-90b1-cb3ecbc1a616","Type":"ContainerStarted","Data":"0f337e8cf652974c251db3065e7b28a00adbf3abfc6536087734e56bd4f0d094"} Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.082786 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.305166 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.463369 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af7c985-eaf7-4e5e-9f63-64fdfa050264-combined-ca-bundle\") pod \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.463752 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af7c985-eaf7-4e5e-9f63-64fdfa050264-logs\") pod \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.463821 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65jg9\" (UniqueName: \"kubernetes.io/projected/9af7c985-eaf7-4e5e-9f63-64fdfa050264-kube-api-access-65jg9\") pod \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.463875 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af7c985-eaf7-4e5e-9f63-64fdfa050264-config-data\") pod \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\" (UID: \"9af7c985-eaf7-4e5e-9f63-64fdfa050264\") " Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.464647 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af7c985-eaf7-4e5e-9f63-64fdfa050264-logs" (OuterVolumeSpecName: "logs") pod "9af7c985-eaf7-4e5e-9f63-64fdfa050264" (UID: "9af7c985-eaf7-4e5e-9f63-64fdfa050264"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.469759 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af7c985-eaf7-4e5e-9f63-64fdfa050264-kube-api-access-65jg9" (OuterVolumeSpecName: "kube-api-access-65jg9") pod "9af7c985-eaf7-4e5e-9f63-64fdfa050264" (UID: "9af7c985-eaf7-4e5e-9f63-64fdfa050264"). InnerVolumeSpecName "kube-api-access-65jg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.493651 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af7c985-eaf7-4e5e-9f63-64fdfa050264-config-data" (OuterVolumeSpecName: "config-data") pod "9af7c985-eaf7-4e5e-9f63-64fdfa050264" (UID: "9af7c985-eaf7-4e5e-9f63-64fdfa050264"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.510860 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af7c985-eaf7-4e5e-9f63-64fdfa050264-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9af7c985-eaf7-4e5e-9f63-64fdfa050264" (UID: "9af7c985-eaf7-4e5e-9f63-64fdfa050264"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.533907 4930 generic.go:334] "Generic (PLEG): container finished" podID="9af7c985-eaf7-4e5e-9f63-64fdfa050264" containerID="9ac3bf16388caffa0bf6bee4cc41b00e38dfc216d354f036651b70409cc673c1" exitCode=0 Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.534021 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af7c985-eaf7-4e5e-9f63-64fdfa050264","Type":"ContainerDied","Data":"9ac3bf16388caffa0bf6bee4cc41b00e38dfc216d354f036651b70409cc673c1"} Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.534053 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9af7c985-eaf7-4e5e-9f63-64fdfa050264","Type":"ContainerDied","Data":"99f872a2f1cb2158522495bec8068169c4bdff9dc69c3162ac3a6da06d984539"} Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.534075 4930 scope.go:117] "RemoveContainer" containerID="9ac3bf16388caffa0bf6bee4cc41b00e38dfc216d354f036651b70409cc673c1" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.535015 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.542585 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1e40006-4e80-4b40-90b1-cb3ecbc1a616","Type":"ContainerStarted","Data":"7946502da995650aa9826d4088941de4779b881b828f39bc86361c58571e43d5"} Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.567342 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af7c985-eaf7-4e5e-9f63-64fdfa050264-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.567679 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65jg9\" (UniqueName: \"kubernetes.io/projected/9af7c985-eaf7-4e5e-9f63-64fdfa050264-kube-api-access-65jg9\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.567691 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af7c985-eaf7-4e5e-9f63-64fdfa050264-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.567702 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af7c985-eaf7-4e5e-9f63-64fdfa050264-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.669275 4930 scope.go:117] "RemoveContainer" containerID="e2e769d88cf03a01c4aa89857aae31163ed667bad7fe7f3aae9d78abeae79a8f" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.693028 4930 scope.go:117] "RemoveContainer" containerID="9ac3bf16388caffa0bf6bee4cc41b00e38dfc216d354f036651b70409cc673c1" Nov 24 12:19:25 crc kubenswrapper[4930]: E1124 12:19:25.696690 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9ac3bf16388caffa0bf6bee4cc41b00e38dfc216d354f036651b70409cc673c1\": container with ID starting with 9ac3bf16388caffa0bf6bee4cc41b00e38dfc216d354f036651b70409cc673c1 not found: ID does not exist" containerID="9ac3bf16388caffa0bf6bee4cc41b00e38dfc216d354f036651b70409cc673c1" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.696732 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac3bf16388caffa0bf6bee4cc41b00e38dfc216d354f036651b70409cc673c1"} err="failed to get container status \"9ac3bf16388caffa0bf6bee4cc41b00e38dfc216d354f036651b70409cc673c1\": rpc error: code = NotFound desc = could not find container \"9ac3bf16388caffa0bf6bee4cc41b00e38dfc216d354f036651b70409cc673c1\": container with ID starting with 9ac3bf16388caffa0bf6bee4cc41b00e38dfc216d354f036651b70409cc673c1 not found: ID does not exist" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.696755 4930 scope.go:117] "RemoveContainer" containerID="e2e769d88cf03a01c4aa89857aae31163ed667bad7fe7f3aae9d78abeae79a8f" Nov 24 12:19:25 crc kubenswrapper[4930]: E1124 12:19:25.697242 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e769d88cf03a01c4aa89857aae31163ed667bad7fe7f3aae9d78abeae79a8f\": container with ID starting with e2e769d88cf03a01c4aa89857aae31163ed667bad7fe7f3aae9d78abeae79a8f not found: ID does not exist" containerID="e2e769d88cf03a01c4aa89857aae31163ed667bad7fe7f3aae9d78abeae79a8f" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.697292 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e769d88cf03a01c4aa89857aae31163ed667bad7fe7f3aae9d78abeae79a8f"} err="failed to get container status \"e2e769d88cf03a01c4aa89857aae31163ed667bad7fe7f3aae9d78abeae79a8f\": rpc error: code = NotFound desc = could not find container \"e2e769d88cf03a01c4aa89857aae31163ed667bad7fe7f3aae9d78abeae79a8f\": container with ID 
starting with e2e769d88cf03a01c4aa89857aae31163ed667bad7fe7f3aae9d78abeae79a8f not found: ID does not exist" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.701596 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.713917 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.732864 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 12:19:25 crc kubenswrapper[4930]: E1124 12:19:25.733251 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af7c985-eaf7-4e5e-9f63-64fdfa050264" containerName="nova-api-api" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.733267 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af7c985-eaf7-4e5e-9f63-64fdfa050264" containerName="nova-api-api" Nov 24 12:19:25 crc kubenswrapper[4930]: E1124 12:19:25.733307 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af7c985-eaf7-4e5e-9f63-64fdfa050264" containerName="nova-api-log" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.733313 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af7c985-eaf7-4e5e-9f63-64fdfa050264" containerName="nova-api-log" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.733490 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af7c985-eaf7-4e5e-9f63-64fdfa050264" containerName="nova-api-log" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.733521 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af7c985-eaf7-4e5e-9f63-64fdfa050264" containerName="nova-api-api" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.734476 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.739434 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.739597 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.739635 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.746101 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.781617 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.800183 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.874096 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/917fc1f6-cbc7-4609-bdb1-f942938384e6-logs\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.874159 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-config-data\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.874178 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-public-tls-certs\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.874210 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.874254 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.874300 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29h4s\" (UniqueName: \"kubernetes.io/projected/917fc1f6-cbc7-4609-bdb1-f942938384e6-kube-api-access-29h4s\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.976277 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.976347 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29h4s\" (UniqueName: \"kubernetes.io/projected/917fc1f6-cbc7-4609-bdb1-f942938384e6-kube-api-access-29h4s\") pod \"nova-api-0\" (UID: 
\"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.976441 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/917fc1f6-cbc7-4609-bdb1-f942938384e6-logs\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.976474 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-config-data\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.976492 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-public-tls-certs\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.976521 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.977421 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/917fc1f6-cbc7-4609-bdb1-f942938384e6-logs\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.982013 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.982418 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.983277 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-config-data\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.995069 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29h4s\" (UniqueName: \"kubernetes.io/projected/917fc1f6-cbc7-4609-bdb1-f942938384e6-kube-api-access-29h4s\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:25 crc kubenswrapper[4930]: I1124 12:19:25.995209 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-public-tls-certs\") pod \"nova-api-0\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " pod="openstack/nova-api-0" Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.071794 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.114084 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af7c985-eaf7-4e5e-9f63-64fdfa050264" path="/var/lib/kubelet/pods/9af7c985-eaf7-4e5e-9f63-64fdfa050264/volumes" Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.555428 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1e40006-4e80-4b40-90b1-cb3ecbc1a616","Type":"ContainerStarted","Data":"ba67aeac74ea7fe3e43d6e0ba59e4d40c811f406c9fa669bea18c955877e6379"} Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.573382 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.592880 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:19:26 crc kubenswrapper[4930]: W1124 12:19:26.596187 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod917fc1f6_cbc7_4609_bdb1_f942938384e6.slice/crio-55dddb790b074b34d4a2ef80753c415d3fda46a56273c4fd2150ec1ee4896f7e WatchSource:0}: Error finding container 55dddb790b074b34d4a2ef80753c415d3fda46a56273c4fd2150ec1ee4896f7e: Status 404 returned error can't find the container with id 55dddb790b074b34d4a2ef80753c415d3fda46a56273c4fd2150ec1ee4896f7e Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.839391 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-4rdst"] Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.840660 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.844311 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.844507 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.854857 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4rdst"] Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.996716 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4rdst\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.996810 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6g9d\" (UniqueName: \"kubernetes.io/projected/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-kube-api-access-r6g9d\") pod \"nova-cell1-cell-mapping-4rdst\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.996875 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-scripts\") pod \"nova-cell1-cell-mapping-4rdst\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:26 crc kubenswrapper[4930]: I1124 12:19:26.996909 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-config-data\") pod \"nova-cell1-cell-mapping-4rdst\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.098886 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6g9d\" (UniqueName: \"kubernetes.io/projected/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-kube-api-access-r6g9d\") pod \"nova-cell1-cell-mapping-4rdst\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.099186 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-scripts\") pod \"nova-cell1-cell-mapping-4rdst\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.099213 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-config-data\") pod \"nova-cell1-cell-mapping-4rdst\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.099307 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4rdst\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.103853 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-scripts\") pod \"nova-cell1-cell-mapping-4rdst\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.104124 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-config-data\") pod \"nova-cell1-cell-mapping-4rdst\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.104722 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4rdst\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.118558 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6g9d\" (UniqueName: \"kubernetes.io/projected/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-kube-api-access-r6g9d\") pod \"nova-cell1-cell-mapping-4rdst\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.160729 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.569042 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"917fc1f6-cbc7-4609-bdb1-f942938384e6","Type":"ContainerStarted","Data":"5cc1bf97adcf98375330d1f214d7f21918443958fd3e279a528f0f410ac10916"} Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.569413 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"917fc1f6-cbc7-4609-bdb1-f942938384e6","Type":"ContainerStarted","Data":"3a9365d71f24490e13d4e3c7913ba2b134de5a8b9d8243783cbacf96132704b0"} Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.569426 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"917fc1f6-cbc7-4609-bdb1-f942938384e6","Type":"ContainerStarted","Data":"55dddb790b074b34d4a2ef80753c415d3fda46a56273c4fd2150ec1ee4896f7e"} Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.587976 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.587959371 podStartE2EDuration="2.587959371s" podCreationTimestamp="2025-11-24 12:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:19:27.586965092 +0000 UTC m=+1214.201293042" watchObservedRunningTime="2025-11-24 12:19:27.587959371 +0000 UTC m=+1214.202287321" Nov 24 12:19:27 crc kubenswrapper[4930]: I1124 12:19:27.678897 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4rdst"] Nov 24 12:19:27 crc kubenswrapper[4930]: W1124 12:19:27.693904 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29c4d14a_c3de_4c3b_a2a8_2148d04821d6.slice/crio-9ecac9972da61e0ddc5e834acf0a2f3babebe6348e01395e7c5fd9de1e557729 
WatchSource:0}: Error finding container 9ecac9972da61e0ddc5e834acf0a2f3babebe6348e01395e7c5fd9de1e557729: Status 404 returned error can't find the container with id 9ecac9972da61e0ddc5e834acf0a2f3babebe6348e01395e7c5fd9de1e557729 Nov 24 12:19:28 crc kubenswrapper[4930]: I1124 12:19:28.580404 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4rdst" event={"ID":"29c4d14a-c3de-4c3b-a2a8-2148d04821d6","Type":"ContainerStarted","Data":"a6da17f8192d6a6c47009adffa120eef41a379ca52741dee1c736409030e9825"} Nov 24 12:19:28 crc kubenswrapper[4930]: I1124 12:19:28.580828 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4rdst" event={"ID":"29c4d14a-c3de-4c3b-a2a8-2148d04821d6","Type":"ContainerStarted","Data":"9ecac9972da61e0ddc5e834acf0a2f3babebe6348e01395e7c5fd9de1e557729"} Nov 24 12:19:28 crc kubenswrapper[4930]: I1124 12:19:28.585737 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="ceilometer-central-agent" containerID="cri-o://0f337e8cf652974c251db3065e7b28a00adbf3abfc6536087734e56bd4f0d094" gracePeriod=30 Nov 24 12:19:28 crc kubenswrapper[4930]: I1124 12:19:28.585935 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="proxy-httpd" containerID="cri-o://494b5c607a53d58320c5244edeab6e00a3ce7209928e1b56e167cc8eea85605f" gracePeriod=30 Nov 24 12:19:28 crc kubenswrapper[4930]: I1124 12:19:28.586017 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="sg-core" containerID="cri-o://ba67aeac74ea7fe3e43d6e0ba59e4d40c811f406c9fa669bea18c955877e6379" gracePeriod=30 Nov 24 12:19:28 crc kubenswrapper[4930]: I1124 12:19:28.586061 4930 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="ceilometer-notification-agent" containerID="cri-o://7946502da995650aa9826d4088941de4779b881b828f39bc86361c58571e43d5" gracePeriod=30 Nov 24 12:19:28 crc kubenswrapper[4930]: I1124 12:19:28.586379 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1e40006-4e80-4b40-90b1-cb3ecbc1a616","Type":"ContainerStarted","Data":"494b5c607a53d58320c5244edeab6e00a3ce7209928e1b56e167cc8eea85605f"} Nov 24 12:19:28 crc kubenswrapper[4930]: I1124 12:19:28.586434 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:19:28 crc kubenswrapper[4930]: I1124 12:19:28.609650 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-4rdst" podStartSLOduration=2.609635211 podStartE2EDuration="2.609635211s" podCreationTimestamp="2025-11-24 12:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:19:28.605011087 +0000 UTC m=+1215.219339037" watchObservedRunningTime="2025-11-24 12:19:28.609635211 +0000 UTC m=+1215.223963161" Nov 24 12:19:28 crc kubenswrapper[4930]: I1124 12:19:28.629919 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.994996634 podStartE2EDuration="7.629896004s" podCreationTimestamp="2025-11-24 12:19:21 +0000 UTC" firstStartedPulling="2025-11-24 12:19:22.445822585 +0000 UTC m=+1209.060150535" lastFinishedPulling="2025-11-24 12:19:28.080721955 +0000 UTC m=+1214.695049905" observedRunningTime="2025-11-24 12:19:28.625018574 +0000 UTC m=+1215.239346524" watchObservedRunningTime="2025-11-24 12:19:28.629896004 +0000 UTC m=+1215.244223954" Nov 24 12:19:28 crc kubenswrapper[4930]: I1124 12:19:28.945721 4930 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.030304 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-6gntb"] Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.030796 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" podUID="12e7b427-3991-4edb-90e8-b0e33bc251f7" containerName="dnsmasq-dns" containerID="cri-o://b6b8226d468ce38822cf012e6ce2cb2cce9a98e44c919a3fbe18bae42a31638e" gracePeriod=10 Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.485439 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.554516 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-dns-svc\") pod \"12e7b427-3991-4edb-90e8-b0e33bc251f7\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.554598 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-config\") pod \"12e7b427-3991-4edb-90e8-b0e33bc251f7\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.557722 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-dns-swift-storage-0\") pod \"12e7b427-3991-4edb-90e8-b0e33bc251f7\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.557808 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vv7js\" (UniqueName: \"kubernetes.io/projected/12e7b427-3991-4edb-90e8-b0e33bc251f7-kube-api-access-vv7js\") pod \"12e7b427-3991-4edb-90e8-b0e33bc251f7\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.557906 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-ovsdbserver-nb\") pod \"12e7b427-3991-4edb-90e8-b0e33bc251f7\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.557984 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-ovsdbserver-sb\") pod \"12e7b427-3991-4edb-90e8-b0e33bc251f7\" (UID: \"12e7b427-3991-4edb-90e8-b0e33bc251f7\") " Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.580418 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e7b427-3991-4edb-90e8-b0e33bc251f7-kube-api-access-vv7js" (OuterVolumeSpecName: "kube-api-access-vv7js") pod "12e7b427-3991-4edb-90e8-b0e33bc251f7" (UID: "12e7b427-3991-4edb-90e8-b0e33bc251f7"). InnerVolumeSpecName "kube-api-access-vv7js". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.597159 4930 generic.go:334] "Generic (PLEG): container finished" podID="12e7b427-3991-4edb-90e8-b0e33bc251f7" containerID="b6b8226d468ce38822cf012e6ce2cb2cce9a98e44c919a3fbe18bae42a31638e" exitCode=0 Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.597226 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" event={"ID":"12e7b427-3991-4edb-90e8-b0e33bc251f7","Type":"ContainerDied","Data":"b6b8226d468ce38822cf012e6ce2cb2cce9a98e44c919a3fbe18bae42a31638e"} Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.597253 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" event={"ID":"12e7b427-3991-4edb-90e8-b0e33bc251f7","Type":"ContainerDied","Data":"c3af0919fc18b2cb5e60258b35f4e1d6d7f10d75878128ed8abc4febc9fd402f"} Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.597279 4930 scope.go:117] "RemoveContainer" containerID="b6b8226d468ce38822cf012e6ce2cb2cce9a98e44c919a3fbe18bae42a31638e" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.597424 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-6gntb" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.639322 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "12e7b427-3991-4edb-90e8-b0e33bc251f7" (UID: "12e7b427-3991-4edb-90e8-b0e33bc251f7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.639896 4930 generic.go:334] "Generic (PLEG): container finished" podID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerID="494b5c607a53d58320c5244edeab6e00a3ce7209928e1b56e167cc8eea85605f" exitCode=0 Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.639938 4930 generic.go:334] "Generic (PLEG): container finished" podID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerID="ba67aeac74ea7fe3e43d6e0ba59e4d40c811f406c9fa669bea18c955877e6379" exitCode=2 Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.639956 4930 generic.go:334] "Generic (PLEG): container finished" podID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerID="7946502da995650aa9826d4088941de4779b881b828f39bc86361c58571e43d5" exitCode=0 Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.639963 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1e40006-4e80-4b40-90b1-cb3ecbc1a616","Type":"ContainerDied","Data":"494b5c607a53d58320c5244edeab6e00a3ce7209928e1b56e167cc8eea85605f"} Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.640010 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1e40006-4e80-4b40-90b1-cb3ecbc1a616","Type":"ContainerDied","Data":"ba67aeac74ea7fe3e43d6e0ba59e4d40c811f406c9fa669bea18c955877e6379"} Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.640020 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1e40006-4e80-4b40-90b1-cb3ecbc1a616","Type":"ContainerDied","Data":"7946502da995650aa9826d4088941de4779b881b828f39bc86361c58571e43d5"} Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.654811 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-config" (OuterVolumeSpecName: "config") pod "12e7b427-3991-4edb-90e8-b0e33bc251f7" 
(UID: "12e7b427-3991-4edb-90e8-b0e33bc251f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.661276 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.661311 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv7js\" (UniqueName: \"kubernetes.io/projected/12e7b427-3991-4edb-90e8-b0e33bc251f7-kube-api-access-vv7js\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.661325 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.670225 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "12e7b427-3991-4edb-90e8-b0e33bc251f7" (UID: "12e7b427-3991-4edb-90e8-b0e33bc251f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.681236 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "12e7b427-3991-4edb-90e8-b0e33bc251f7" (UID: "12e7b427-3991-4edb-90e8-b0e33bc251f7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.700200 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "12e7b427-3991-4edb-90e8-b0e33bc251f7" (UID: "12e7b427-3991-4edb-90e8-b0e33bc251f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.762595 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.762629 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.762638 4930 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12e7b427-3991-4edb-90e8-b0e33bc251f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.776662 4930 scope.go:117] "RemoveContainer" containerID="142cda624ec1dfc667fbe3c1c508fc12c723b097c59b7320989fbefe8c5c341a" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.801638 4930 scope.go:117] "RemoveContainer" containerID="b6b8226d468ce38822cf012e6ce2cb2cce9a98e44c919a3fbe18bae42a31638e" Nov 24 12:19:29 crc kubenswrapper[4930]: E1124 12:19:29.802201 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6b8226d468ce38822cf012e6ce2cb2cce9a98e44c919a3fbe18bae42a31638e\": container with ID starting with 
b6b8226d468ce38822cf012e6ce2cb2cce9a98e44c919a3fbe18bae42a31638e not found: ID does not exist" containerID="b6b8226d468ce38822cf012e6ce2cb2cce9a98e44c919a3fbe18bae42a31638e" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.802263 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b8226d468ce38822cf012e6ce2cb2cce9a98e44c919a3fbe18bae42a31638e"} err="failed to get container status \"b6b8226d468ce38822cf012e6ce2cb2cce9a98e44c919a3fbe18bae42a31638e\": rpc error: code = NotFound desc = could not find container \"b6b8226d468ce38822cf012e6ce2cb2cce9a98e44c919a3fbe18bae42a31638e\": container with ID starting with b6b8226d468ce38822cf012e6ce2cb2cce9a98e44c919a3fbe18bae42a31638e not found: ID does not exist" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.802299 4930 scope.go:117] "RemoveContainer" containerID="142cda624ec1dfc667fbe3c1c508fc12c723b097c59b7320989fbefe8c5c341a" Nov 24 12:19:29 crc kubenswrapper[4930]: E1124 12:19:29.802798 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"142cda624ec1dfc667fbe3c1c508fc12c723b097c59b7320989fbefe8c5c341a\": container with ID starting with 142cda624ec1dfc667fbe3c1c508fc12c723b097c59b7320989fbefe8c5c341a not found: ID does not exist" containerID="142cda624ec1dfc667fbe3c1c508fc12c723b097c59b7320989fbefe8c5c341a" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.802851 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"142cda624ec1dfc667fbe3c1c508fc12c723b097c59b7320989fbefe8c5c341a"} err="failed to get container status \"142cda624ec1dfc667fbe3c1c508fc12c723b097c59b7320989fbefe8c5c341a\": rpc error: code = NotFound desc = could not find container \"142cda624ec1dfc667fbe3c1c508fc12c723b097c59b7320989fbefe8c5c341a\": container with ID starting with 142cda624ec1dfc667fbe3c1c508fc12c723b097c59b7320989fbefe8c5c341a not found: ID does not 
exist" Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.931396 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-6gntb"] Nov 24 12:19:29 crc kubenswrapper[4930]: I1124 12:19:29.941630 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-6gntb"] Nov 24 12:19:30 crc kubenswrapper[4930]: I1124 12:19:30.097581 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e7b427-3991-4edb-90e8-b0e33bc251f7" path="/var/lib/kubelet/pods/12e7b427-3991-4edb-90e8-b0e33bc251f7/volumes" Nov 24 12:19:31 crc kubenswrapper[4930]: I1124 12:19:31.810210 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:19:31 crc kubenswrapper[4930]: I1124 12:19:31.810601 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.578524 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.679562 4930 generic.go:334] "Generic (PLEG): container finished" podID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerID="0f337e8cf652974c251db3065e7b28a00adbf3abfc6536087734e56bd4f0d094" exitCode=0 Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.679609 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1e40006-4e80-4b40-90b1-cb3ecbc1a616","Type":"ContainerDied","Data":"0f337e8cf652974c251db3065e7b28a00adbf3abfc6536087734e56bd4f0d094"} Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.679638 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1e40006-4e80-4b40-90b1-cb3ecbc1a616","Type":"ContainerDied","Data":"81087a87589343733f8b20c2ab4e4ed8f995f3e3c81d58718640cbbce200c407"} Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.679657 4930 scope.go:117] "RemoveContainer" containerID="494b5c607a53d58320c5244edeab6e00a3ce7209928e1b56e167cc8eea85605f" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.679682 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.712537 4930 scope.go:117] "RemoveContainer" containerID="ba67aeac74ea7fe3e43d6e0ba59e4d40c811f406c9fa669bea18c955877e6379" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.720177 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-ceilometer-tls-certs\") pod \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.720231 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-combined-ca-bundle\") pod \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.720375 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-sg-core-conf-yaml\") pod \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.720526 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-config-data\") pod \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.720612 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcxpj\" (UniqueName: \"kubernetes.io/projected/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-kube-api-access-hcxpj\") pod \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\" (UID: 
\"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.720648 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-run-httpd\") pod \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.720684 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-scripts\") pod \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.720728 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-log-httpd\") pod \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\" (UID: \"e1e40006-4e80-4b40-90b1-cb3ecbc1a616\") " Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.721455 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e1e40006-4e80-4b40-90b1-cb3ecbc1a616" (UID: "e1e40006-4e80-4b40-90b1-cb3ecbc1a616"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.721472 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e1e40006-4e80-4b40-90b1-cb3ecbc1a616" (UID: "e1e40006-4e80-4b40-90b1-cb3ecbc1a616"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.727511 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-kube-api-access-hcxpj" (OuterVolumeSpecName: "kube-api-access-hcxpj") pod "e1e40006-4e80-4b40-90b1-cb3ecbc1a616" (UID: "e1e40006-4e80-4b40-90b1-cb3ecbc1a616"). InnerVolumeSpecName "kube-api-access-hcxpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.727579 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-scripts" (OuterVolumeSpecName: "scripts") pod "e1e40006-4e80-4b40-90b1-cb3ecbc1a616" (UID: "e1e40006-4e80-4b40-90b1-cb3ecbc1a616"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.733808 4930 scope.go:117] "RemoveContainer" containerID="7946502da995650aa9826d4088941de4779b881b828f39bc86361c58571e43d5" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.767378 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e1e40006-4e80-4b40-90b1-cb3ecbc1a616" (UID: "e1e40006-4e80-4b40-90b1-cb3ecbc1a616"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.788821 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e1e40006-4e80-4b40-90b1-cb3ecbc1a616" (UID: "e1e40006-4e80-4b40-90b1-cb3ecbc1a616"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.808205 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1e40006-4e80-4b40-90b1-cb3ecbc1a616" (UID: "e1e40006-4e80-4b40-90b1-cb3ecbc1a616"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.822919 4930 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.822952 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcxpj\" (UniqueName: \"kubernetes.io/projected/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-kube-api-access-hcxpj\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.822964 4930 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.822973 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.822981 4930 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.822992 4930 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.823003 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.844731 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-config-data" (OuterVolumeSpecName: "config-data") pod "e1e40006-4e80-4b40-90b1-cb3ecbc1a616" (UID: "e1e40006-4e80-4b40-90b1-cb3ecbc1a616"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.868458 4930 scope.go:117] "RemoveContainer" containerID="0f337e8cf652974c251db3065e7b28a00adbf3abfc6536087734e56bd4f0d094" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.889356 4930 scope.go:117] "RemoveContainer" containerID="494b5c607a53d58320c5244edeab6e00a3ce7209928e1b56e167cc8eea85605f" Nov 24 12:19:32 crc kubenswrapper[4930]: E1124 12:19:32.889759 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"494b5c607a53d58320c5244edeab6e00a3ce7209928e1b56e167cc8eea85605f\": container with ID starting with 494b5c607a53d58320c5244edeab6e00a3ce7209928e1b56e167cc8eea85605f not found: ID does not exist" containerID="494b5c607a53d58320c5244edeab6e00a3ce7209928e1b56e167cc8eea85605f" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.889804 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"494b5c607a53d58320c5244edeab6e00a3ce7209928e1b56e167cc8eea85605f"} err="failed to get container status \"494b5c607a53d58320c5244edeab6e00a3ce7209928e1b56e167cc8eea85605f\": rpc error: code = 
NotFound desc = could not find container \"494b5c607a53d58320c5244edeab6e00a3ce7209928e1b56e167cc8eea85605f\": container with ID starting with 494b5c607a53d58320c5244edeab6e00a3ce7209928e1b56e167cc8eea85605f not found: ID does not exist" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.889828 4930 scope.go:117] "RemoveContainer" containerID="ba67aeac74ea7fe3e43d6e0ba59e4d40c811f406c9fa669bea18c955877e6379" Nov 24 12:19:32 crc kubenswrapper[4930]: E1124 12:19:32.890127 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba67aeac74ea7fe3e43d6e0ba59e4d40c811f406c9fa669bea18c955877e6379\": container with ID starting with ba67aeac74ea7fe3e43d6e0ba59e4d40c811f406c9fa669bea18c955877e6379 not found: ID does not exist" containerID="ba67aeac74ea7fe3e43d6e0ba59e4d40c811f406c9fa669bea18c955877e6379" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.890159 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba67aeac74ea7fe3e43d6e0ba59e4d40c811f406c9fa669bea18c955877e6379"} err="failed to get container status \"ba67aeac74ea7fe3e43d6e0ba59e4d40c811f406c9fa669bea18c955877e6379\": rpc error: code = NotFound desc = could not find container \"ba67aeac74ea7fe3e43d6e0ba59e4d40c811f406c9fa669bea18c955877e6379\": container with ID starting with ba67aeac74ea7fe3e43d6e0ba59e4d40c811f406c9fa669bea18c955877e6379 not found: ID does not exist" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.890179 4930 scope.go:117] "RemoveContainer" containerID="7946502da995650aa9826d4088941de4779b881b828f39bc86361c58571e43d5" Nov 24 12:19:32 crc kubenswrapper[4930]: E1124 12:19:32.890494 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7946502da995650aa9826d4088941de4779b881b828f39bc86361c58571e43d5\": container with ID starting with 
7946502da995650aa9826d4088941de4779b881b828f39bc86361c58571e43d5 not found: ID does not exist" containerID="7946502da995650aa9826d4088941de4779b881b828f39bc86361c58571e43d5" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.890516 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7946502da995650aa9826d4088941de4779b881b828f39bc86361c58571e43d5"} err="failed to get container status \"7946502da995650aa9826d4088941de4779b881b828f39bc86361c58571e43d5\": rpc error: code = NotFound desc = could not find container \"7946502da995650aa9826d4088941de4779b881b828f39bc86361c58571e43d5\": container with ID starting with 7946502da995650aa9826d4088941de4779b881b828f39bc86361c58571e43d5 not found: ID does not exist" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.890531 4930 scope.go:117] "RemoveContainer" containerID="0f337e8cf652974c251db3065e7b28a00adbf3abfc6536087734e56bd4f0d094" Nov 24 12:19:32 crc kubenswrapper[4930]: E1124 12:19:32.890920 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f337e8cf652974c251db3065e7b28a00adbf3abfc6536087734e56bd4f0d094\": container with ID starting with 0f337e8cf652974c251db3065e7b28a00adbf3abfc6536087734e56bd4f0d094 not found: ID does not exist" containerID="0f337e8cf652974c251db3065e7b28a00adbf3abfc6536087734e56bd4f0d094" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.890939 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f337e8cf652974c251db3065e7b28a00adbf3abfc6536087734e56bd4f0d094"} err="failed to get container status \"0f337e8cf652974c251db3065e7b28a00adbf3abfc6536087734e56bd4f0d094\": rpc error: code = NotFound desc = could not find container \"0f337e8cf652974c251db3065e7b28a00adbf3abfc6536087734e56bd4f0d094\": container with ID starting with 0f337e8cf652974c251db3065e7b28a00adbf3abfc6536087734e56bd4f0d094 not found: ID does not 
exist" Nov 24 12:19:32 crc kubenswrapper[4930]: I1124 12:19:32.928441 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1e40006-4e80-4b40-90b1-cb3ecbc1a616-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.011908 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.020984 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.044275 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:19:33 crc kubenswrapper[4930]: E1124 12:19:33.044691 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="proxy-httpd" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.044708 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="proxy-httpd" Nov 24 12:19:33 crc kubenswrapper[4930]: E1124 12:19:33.044730 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="ceilometer-central-agent" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.044736 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="ceilometer-central-agent" Nov 24 12:19:33 crc kubenswrapper[4930]: E1124 12:19:33.044750 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="sg-core" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.044757 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="sg-core" Nov 24 12:19:33 crc kubenswrapper[4930]: E1124 12:19:33.044770 4930 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="12e7b427-3991-4edb-90e8-b0e33bc251f7" containerName="dnsmasq-dns" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.044776 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e7b427-3991-4edb-90e8-b0e33bc251f7" containerName="dnsmasq-dns" Nov 24 12:19:33 crc kubenswrapper[4930]: E1124 12:19:33.044800 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e7b427-3991-4edb-90e8-b0e33bc251f7" containerName="init" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.044806 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e7b427-3991-4edb-90e8-b0e33bc251f7" containerName="init" Nov 24 12:19:33 crc kubenswrapper[4930]: E1124 12:19:33.044817 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="ceilometer-notification-agent" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.044823 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="ceilometer-notification-agent" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.044987 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="sg-core" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.045010 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="ceilometer-central-agent" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.045020 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="proxy-httpd" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.045029 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e7b427-3991-4edb-90e8-b0e33bc251f7" containerName="dnsmasq-dns" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.045045 4930 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" containerName="ceilometer-notification-agent" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.046827 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.050061 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.051092 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.051224 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.057745 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.132213 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfcxx\" (UniqueName: \"kubernetes.io/projected/5163ee34-cf81-4983-a359-1224b73676fe-kube-api-access-mfcxx\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.132598 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.132847 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.132895 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5163ee34-cf81-4983-a359-1224b73676fe-run-httpd\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.132957 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.133164 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-scripts\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.133397 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5163ee34-cf81-4983-a359-1224b73676fe-log-httpd\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.133467 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-config-data\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.235843 
4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.236755 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5163ee34-cf81-4983-a359-1224b73676fe-run-httpd\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.236793 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5163ee34-cf81-4983-a359-1224b73676fe-run-httpd\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.236869 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.236898 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-scripts\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.237453 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5163ee34-cf81-4983-a359-1224b73676fe-log-httpd\") pod \"ceilometer-0\" (UID: 
\"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.237516 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-config-data\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.237622 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfcxx\" (UniqueName: \"kubernetes.io/projected/5163ee34-cf81-4983-a359-1224b73676fe-kube-api-access-mfcxx\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.237771 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.238211 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5163ee34-cf81-4983-a359-1224b73676fe-log-httpd\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.240613 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.240759 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-scripts\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.241203 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.241960 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.245726 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5163ee34-cf81-4983-a359-1224b73676fe-config-data\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.259288 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfcxx\" (UniqueName: \"kubernetes.io/projected/5163ee34-cf81-4983-a359-1224b73676fe-kube-api-access-mfcxx\") pod \"ceilometer-0\" (UID: \"5163ee34-cf81-4983-a359-1224b73676fe\") " pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.365302 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:19:33 crc kubenswrapper[4930]: I1124 12:19:33.832309 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:19:33 crc kubenswrapper[4930]: W1124 12:19:33.852611 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5163ee34_cf81_4983_a359_1224b73676fe.slice/crio-354379d4959cfd4ab36b016390cddf01c2cba26b3a57e6adaf2f15abfe5db242 WatchSource:0}: Error finding container 354379d4959cfd4ab36b016390cddf01c2cba26b3a57e6adaf2f15abfe5db242: Status 404 returned error can't find the container with id 354379d4959cfd4ab36b016390cddf01c2cba26b3a57e6adaf2f15abfe5db242 Nov 24 12:19:34 crc kubenswrapper[4930]: I1124 12:19:34.097526 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e40006-4e80-4b40-90b1-cb3ecbc1a616" path="/var/lib/kubelet/pods/e1e40006-4e80-4b40-90b1-cb3ecbc1a616/volumes" Nov 24 12:19:34 crc kubenswrapper[4930]: I1124 12:19:34.733561 4930 generic.go:334] "Generic (PLEG): container finished" podID="29c4d14a-c3de-4c3b-a2a8-2148d04821d6" containerID="a6da17f8192d6a6c47009adffa120eef41a379ca52741dee1c736409030e9825" exitCode=0 Nov 24 12:19:34 crc kubenswrapper[4930]: I1124 12:19:34.733566 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4rdst" event={"ID":"29c4d14a-c3de-4c3b-a2a8-2148d04821d6","Type":"ContainerDied","Data":"a6da17f8192d6a6c47009adffa120eef41a379ca52741dee1c736409030e9825"} Nov 24 12:19:34 crc kubenswrapper[4930]: I1124 12:19:34.737103 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5163ee34-cf81-4983-a359-1224b73676fe","Type":"ContainerStarted","Data":"aa4936d89f1c72f5fcb6f4a15f39858f65334226f7fef922d57a028d6757170a"} Nov 24 12:19:34 crc kubenswrapper[4930]: I1124 12:19:34.737147 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"5163ee34-cf81-4983-a359-1224b73676fe","Type":"ContainerStarted","Data":"354379d4959cfd4ab36b016390cddf01c2cba26b3a57e6adaf2f15abfe5db242"} Nov 24 12:19:35 crc kubenswrapper[4930]: I1124 12:19:35.747604 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5163ee34-cf81-4983-a359-1224b73676fe","Type":"ContainerStarted","Data":"a86205bcdb4945d9801d87218b863ae4c2bb49f56b7a6793df24ab7f171e9b7a"} Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.072601 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.072923 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.170672 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.295106 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-scripts\") pod \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.295181 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-config-data\") pod \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.295371 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6g9d\" (UniqueName: \"kubernetes.io/projected/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-kube-api-access-r6g9d\") pod 
\"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.295400 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-combined-ca-bundle\") pod \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\" (UID: \"29c4d14a-c3de-4c3b-a2a8-2148d04821d6\") " Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.300276 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-scripts" (OuterVolumeSpecName: "scripts") pod "29c4d14a-c3de-4c3b-a2a8-2148d04821d6" (UID: "29c4d14a-c3de-4c3b-a2a8-2148d04821d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.300607 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-kube-api-access-r6g9d" (OuterVolumeSpecName: "kube-api-access-r6g9d") pod "29c4d14a-c3de-4c3b-a2a8-2148d04821d6" (UID: "29c4d14a-c3de-4c3b-a2a8-2148d04821d6"). InnerVolumeSpecName "kube-api-access-r6g9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.332288 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-config-data" (OuterVolumeSpecName: "config-data") pod "29c4d14a-c3de-4c3b-a2a8-2148d04821d6" (UID: "29c4d14a-c3de-4c3b-a2a8-2148d04821d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.339051 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29c4d14a-c3de-4c3b-a2a8-2148d04821d6" (UID: "29c4d14a-c3de-4c3b-a2a8-2148d04821d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.397133 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6g9d\" (UniqueName: \"kubernetes.io/projected/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-kube-api-access-r6g9d\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.397223 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.397242 4930 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.397254 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c4d14a-c3de-4c3b-a2a8-2148d04821d6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.758147 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4rdst" event={"ID":"29c4d14a-c3de-4c3b-a2a8-2148d04821d6","Type":"ContainerDied","Data":"9ecac9972da61e0ddc5e834acf0a2f3babebe6348e01395e7c5fd9de1e557729"} Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.758192 4930 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="9ecac9972da61e0ddc5e834acf0a2f3babebe6348e01395e7c5fd9de1e557729" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.758170 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4rdst" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.761385 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5163ee34-cf81-4983-a359-1224b73676fe","Type":"ContainerStarted","Data":"1ba9f6efade7e8e85fa35e175b768ef01dd463728222fa0c9a48d814df2e006e"} Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.944584 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.945068 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="42832f49-0332-47be-b2d3-072f00d69bb6" containerName="nova-scheduler-scheduler" containerID="cri-o://6cfdee810727f134c44e2d86043d6c2e742dd97ca1f1d45c2d784e4f6ac37d89" gracePeriod=30 Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.961256 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.961850 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="917fc1f6-cbc7-4609-bdb1-f942938384e6" containerName="nova-api-api" containerID="cri-o://5cc1bf97adcf98375330d1f214d7f21918443958fd3e279a528f0f410ac10916" gracePeriod=30 Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.961503 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="917fc1f6-cbc7-4609-bdb1-f942938384e6" containerName="nova-api-log" containerID="cri-o://3a9365d71f24490e13d4e3c7913ba2b134de5a8b9d8243783cbacf96132704b0" gracePeriod=30 Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.966406 4930 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="917fc1f6-cbc7-4609-bdb1-f942938384e6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": EOF" Nov 24 12:19:36 crc kubenswrapper[4930]: I1124 12:19:36.966665 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="917fc1f6-cbc7-4609-bdb1-f942938384e6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": EOF" Nov 24 12:19:37 crc kubenswrapper[4930]: I1124 12:19:37.032454 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:19:37 crc kubenswrapper[4930]: I1124 12:19:37.032678 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerName="nova-metadata-log" containerID="cri-o://663f21bd1d5bed3172cdc49f0899ea1e0cfaeb09d5775de325d6acd33fff3522" gracePeriod=30 Nov 24 12:19:37 crc kubenswrapper[4930]: I1124 12:19:37.033094 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerName="nova-metadata-metadata" containerID="cri-o://ee67d12628de1c0b92a5c6ef7f46f6c9328b509b85283b1e878a92de43351fd4" gracePeriod=30 Nov 24 12:19:37 crc kubenswrapper[4930]: E1124 12:19:37.646960 4930 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6cfdee810727f134c44e2d86043d6c2e742dd97ca1f1d45c2d784e4f6ac37d89" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:19:37 crc kubenswrapper[4930]: E1124 12:19:37.648727 4930 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="6cfdee810727f134c44e2d86043d6c2e742dd97ca1f1d45c2d784e4f6ac37d89" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:19:37 crc kubenswrapper[4930]: E1124 12:19:37.654193 4930 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6cfdee810727f134c44e2d86043d6c2e742dd97ca1f1d45c2d784e4f6ac37d89" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:19:37 crc kubenswrapper[4930]: E1124 12:19:37.654257 4930 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="42832f49-0332-47be-b2d3-072f00d69bb6" containerName="nova-scheduler-scheduler" Nov 24 12:19:37 crc kubenswrapper[4930]: I1124 12:19:37.777554 4930 generic.go:334] "Generic (PLEG): container finished" podID="917fc1f6-cbc7-4609-bdb1-f942938384e6" containerID="3a9365d71f24490e13d4e3c7913ba2b134de5a8b9d8243783cbacf96132704b0" exitCode=143 Nov 24 12:19:37 crc kubenswrapper[4930]: I1124 12:19:37.777581 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"917fc1f6-cbc7-4609-bdb1-f942938384e6","Type":"ContainerDied","Data":"3a9365d71f24490e13d4e3c7913ba2b134de5a8b9d8243783cbacf96132704b0"} Nov 24 12:19:37 crc kubenswrapper[4930]: I1124 12:19:37.780398 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5163ee34-cf81-4983-a359-1224b73676fe","Type":"ContainerStarted","Data":"0c1116e8ed8e76b467a06fb1eecd39bca9f23388ab132be05853a9d267d1fdd9"} Nov 24 12:19:37 crc kubenswrapper[4930]: I1124 12:19:37.780548 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:19:37 crc kubenswrapper[4930]: I1124 
12:19:37.803312 4930 generic.go:334] "Generic (PLEG): container finished" podID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerID="663f21bd1d5bed3172cdc49f0899ea1e0cfaeb09d5775de325d6acd33fff3522" exitCode=143 Nov 24 12:19:37 crc kubenswrapper[4930]: I1124 12:19:37.803358 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"221a9965-f13c-43b6-bf2e-a8fd14acffc9","Type":"ContainerDied","Data":"663f21bd1d5bed3172cdc49f0899ea1e0cfaeb09d5775de325d6acd33fff3522"} Nov 24 12:19:37 crc kubenswrapper[4930]: I1124 12:19:37.815845 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.464390455 podStartE2EDuration="4.815819067s" podCreationTimestamp="2025-11-24 12:19:33 +0000 UTC" firstStartedPulling="2025-11-24 12:19:33.858352927 +0000 UTC m=+1220.472680877" lastFinishedPulling="2025-11-24 12:19:37.209781539 +0000 UTC m=+1223.824109489" observedRunningTime="2025-11-24 12:19:37.80030705 +0000 UTC m=+1224.414635000" watchObservedRunningTime="2025-11-24 12:19:37.815819067 +0000 UTC m=+1224.430147017" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.169391 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:52834->10.217.0.196:8775: read: connection reset by peer" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.171458 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:52836->10.217.0.196:8775: read: connection reset by peer" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.668505 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.780842 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nt9s\" (UniqueName: \"kubernetes.io/projected/221a9965-f13c-43b6-bf2e-a8fd14acffc9-kube-api-access-5nt9s\") pod \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.780908 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-combined-ca-bundle\") pod \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.780966 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-nova-metadata-tls-certs\") pod \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.781029 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-config-data\") pod \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.781093 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221a9965-f13c-43b6-bf2e-a8fd14acffc9-logs\") pod \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\" (UID: \"221a9965-f13c-43b6-bf2e-a8fd14acffc9\") " Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.782062 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/221a9965-f13c-43b6-bf2e-a8fd14acffc9-logs" (OuterVolumeSpecName: "logs") pod "221a9965-f13c-43b6-bf2e-a8fd14acffc9" (UID: "221a9965-f13c-43b6-bf2e-a8fd14acffc9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.813212 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/221a9965-f13c-43b6-bf2e-a8fd14acffc9-kube-api-access-5nt9s" (OuterVolumeSpecName: "kube-api-access-5nt9s") pod "221a9965-f13c-43b6-bf2e-a8fd14acffc9" (UID: "221a9965-f13c-43b6-bf2e-a8fd14acffc9"). InnerVolumeSpecName "kube-api-access-5nt9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.817691 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-config-data" (OuterVolumeSpecName: "config-data") pod "221a9965-f13c-43b6-bf2e-a8fd14acffc9" (UID: "221a9965-f13c-43b6-bf2e-a8fd14acffc9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.826756 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "221a9965-f13c-43b6-bf2e-a8fd14acffc9" (UID: "221a9965-f13c-43b6-bf2e-a8fd14acffc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.840354 4930 generic.go:334] "Generic (PLEG): container finished" podID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerID="ee67d12628de1c0b92a5c6ef7f46f6c9328b509b85283b1e878a92de43351fd4" exitCode=0 Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.840439 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.840464 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "221a9965-f13c-43b6-bf2e-a8fd14acffc9" (UID: "221a9965-f13c-43b6-bf2e-a8fd14acffc9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.840757 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"221a9965-f13c-43b6-bf2e-a8fd14acffc9","Type":"ContainerDied","Data":"ee67d12628de1c0b92a5c6ef7f46f6c9328b509b85283b1e878a92de43351fd4"} Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.841362 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"221a9965-f13c-43b6-bf2e-a8fd14acffc9","Type":"ContainerDied","Data":"b3e944ab18e8e07fcd28aeeae2762ccd2725c4bbb73dce6590701f36faa83b69"} Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.841391 4930 scope.go:117] "RemoveContainer" containerID="ee67d12628de1c0b92a5c6ef7f46f6c9328b509b85283b1e878a92de43351fd4" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.883730 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221a9965-f13c-43b6-bf2e-a8fd14acffc9-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.883764 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nt9s\" (UniqueName: \"kubernetes.io/projected/221a9965-f13c-43b6-bf2e-a8fd14acffc9-kube-api-access-5nt9s\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.883776 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.883785 4930 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.883793 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221a9965-f13c-43b6-bf2e-a8fd14acffc9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.932086 4930 scope.go:117] "RemoveContainer" containerID="663f21bd1d5bed3172cdc49f0899ea1e0cfaeb09d5775de325d6acd33fff3522" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.943094 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.963924 4930 scope.go:117] "RemoveContainer" containerID="ee67d12628de1c0b92a5c6ef7f46f6c9328b509b85283b1e878a92de43351fd4" Nov 24 12:19:40 crc kubenswrapper[4930]: E1124 12:19:40.965800 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee67d12628de1c0b92a5c6ef7f46f6c9328b509b85283b1e878a92de43351fd4\": container with ID starting with ee67d12628de1c0b92a5c6ef7f46f6c9328b509b85283b1e878a92de43351fd4 not found: ID does not exist" containerID="ee67d12628de1c0b92a5c6ef7f46f6c9328b509b85283b1e878a92de43351fd4" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.965848 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee67d12628de1c0b92a5c6ef7f46f6c9328b509b85283b1e878a92de43351fd4"} err="failed to get container status \"ee67d12628de1c0b92a5c6ef7f46f6c9328b509b85283b1e878a92de43351fd4\": rpc error: code = NotFound desc 
= could not find container \"ee67d12628de1c0b92a5c6ef7f46f6c9328b509b85283b1e878a92de43351fd4\": container with ID starting with ee67d12628de1c0b92a5c6ef7f46f6c9328b509b85283b1e878a92de43351fd4 not found: ID does not exist" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.965876 4930 scope.go:117] "RemoveContainer" containerID="663f21bd1d5bed3172cdc49f0899ea1e0cfaeb09d5775de325d6acd33fff3522" Nov 24 12:19:40 crc kubenswrapper[4930]: E1124 12:19:40.966318 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"663f21bd1d5bed3172cdc49f0899ea1e0cfaeb09d5775de325d6acd33fff3522\": container with ID starting with 663f21bd1d5bed3172cdc49f0899ea1e0cfaeb09d5775de325d6acd33fff3522 not found: ID does not exist" containerID="663f21bd1d5bed3172cdc49f0899ea1e0cfaeb09d5775de325d6acd33fff3522" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.966385 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"663f21bd1d5bed3172cdc49f0899ea1e0cfaeb09d5775de325d6acd33fff3522"} err="failed to get container status \"663f21bd1d5bed3172cdc49f0899ea1e0cfaeb09d5775de325d6acd33fff3522\": rpc error: code = NotFound desc = could not find container \"663f21bd1d5bed3172cdc49f0899ea1e0cfaeb09d5775de325d6acd33fff3522\": container with ID starting with 663f21bd1d5bed3172cdc49f0899ea1e0cfaeb09d5775de325d6acd33fff3522 not found: ID does not exist" Nov 24 12:19:40 crc kubenswrapper[4930]: I1124 12:19:40.979970 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.007744 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:19:41 crc kubenswrapper[4930]: E1124 12:19:41.008385 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerName="nova-metadata-metadata" Nov 24 12:19:41 crc 
kubenswrapper[4930]: I1124 12:19:41.008407 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerName="nova-metadata-metadata" Nov 24 12:19:41 crc kubenswrapper[4930]: E1124 12:19:41.008447 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerName="nova-metadata-log" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.008456 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerName="nova-metadata-log" Nov 24 12:19:41 crc kubenswrapper[4930]: E1124 12:19:41.008477 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c4d14a-c3de-4c3b-a2a8-2148d04821d6" containerName="nova-manage" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.008486 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c4d14a-c3de-4c3b-a2a8-2148d04821d6" containerName="nova-manage" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.008747 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c4d14a-c3de-4c3b-a2a8-2148d04821d6" containerName="nova-manage" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.008779 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerName="nova-metadata-metadata" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.008801 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" containerName="nova-metadata-log" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.010097 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.012042 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.012890 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.017656 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.097364 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5758b132-d70a-4597-87b7-f172d1e8560a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.097526 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5758b132-d70a-4597-87b7-f172d1e8560a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.097566 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5568\" (UniqueName: \"kubernetes.io/projected/5758b132-d70a-4597-87b7-f172d1e8560a-kube-api-access-w5568\") pod \"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.097586 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5758b132-d70a-4597-87b7-f172d1e8560a-logs\") pod 
\"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.097602 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5758b132-d70a-4597-87b7-f172d1e8560a-config-data\") pod \"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.199826 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5758b132-d70a-4597-87b7-f172d1e8560a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.199866 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5568\" (UniqueName: \"kubernetes.io/projected/5758b132-d70a-4597-87b7-f172d1e8560a-kube-api-access-w5568\") pod \"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.199889 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5758b132-d70a-4597-87b7-f172d1e8560a-logs\") pod \"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.199930 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5758b132-d70a-4597-87b7-f172d1e8560a-config-data\") pod \"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.199986 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5758b132-d70a-4597-87b7-f172d1e8560a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.201434 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5758b132-d70a-4597-87b7-f172d1e8560a-logs\") pod \"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.204618 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5758b132-d70a-4597-87b7-f172d1e8560a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.204938 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5758b132-d70a-4597-87b7-f172d1e8560a-config-data\") pod \"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.207951 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5758b132-d70a-4597-87b7-f172d1e8560a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.218343 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5568\" (UniqueName: \"kubernetes.io/projected/5758b132-d70a-4597-87b7-f172d1e8560a-kube-api-access-w5568\") pod \"nova-metadata-0\" 
(UID: \"5758b132-d70a-4597-87b7-f172d1e8560a\") " pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.332824 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.841178 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.859841 4930 generic.go:334] "Generic (PLEG): container finished" podID="42832f49-0332-47be-b2d3-072f00d69bb6" containerID="6cfdee810727f134c44e2d86043d6c2e742dd97ca1f1d45c2d784e4f6ac37d89" exitCode=0 Nov 24 12:19:41 crc kubenswrapper[4930]: I1124 12:19:41.859914 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"42832f49-0332-47be-b2d3-072f00d69bb6","Type":"ContainerDied","Data":"6cfdee810727f134c44e2d86043d6c2e742dd97ca1f1d45c2d784e4f6ac37d89"} Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.000450 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.098569 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="221a9965-f13c-43b6-bf2e-a8fd14acffc9" path="/var/lib/kubelet/pods/221a9965-f13c-43b6-bf2e-a8fd14acffc9/volumes" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.122143 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckscv\" (UniqueName: \"kubernetes.io/projected/42832f49-0332-47be-b2d3-072f00d69bb6-kube-api-access-ckscv\") pod \"42832f49-0332-47be-b2d3-072f00d69bb6\" (UID: \"42832f49-0332-47be-b2d3-072f00d69bb6\") " Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.122322 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42832f49-0332-47be-b2d3-072f00d69bb6-combined-ca-bundle\") pod \"42832f49-0332-47be-b2d3-072f00d69bb6\" (UID: \"42832f49-0332-47be-b2d3-072f00d69bb6\") " Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.122420 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42832f49-0332-47be-b2d3-072f00d69bb6-config-data\") pod \"42832f49-0332-47be-b2d3-072f00d69bb6\" (UID: \"42832f49-0332-47be-b2d3-072f00d69bb6\") " Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.129436 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42832f49-0332-47be-b2d3-072f00d69bb6-kube-api-access-ckscv" (OuterVolumeSpecName: "kube-api-access-ckscv") pod "42832f49-0332-47be-b2d3-072f00d69bb6" (UID: "42832f49-0332-47be-b2d3-072f00d69bb6"). InnerVolumeSpecName "kube-api-access-ckscv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.165631 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42832f49-0332-47be-b2d3-072f00d69bb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42832f49-0332-47be-b2d3-072f00d69bb6" (UID: "42832f49-0332-47be-b2d3-072f00d69bb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.169467 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42832f49-0332-47be-b2d3-072f00d69bb6-config-data" (OuterVolumeSpecName: "config-data") pod "42832f49-0332-47be-b2d3-072f00d69bb6" (UID: "42832f49-0332-47be-b2d3-072f00d69bb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.225345 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42832f49-0332-47be-b2d3-072f00d69bb6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.225377 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckscv\" (UniqueName: \"kubernetes.io/projected/42832f49-0332-47be-b2d3-072f00d69bb6-kube-api-access-ckscv\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.225390 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42832f49-0332-47be-b2d3-072f00d69bb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.875703 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"42832f49-0332-47be-b2d3-072f00d69bb6","Type":"ContainerDied","Data":"2249027c4df70c9e869928eff9c14433364a31b4e361499b31e2df40b068e272"} Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.877087 4930 scope.go:117] "RemoveContainer" containerID="6cfdee810727f134c44e2d86043d6c2e742dd97ca1f1d45c2d784e4f6ac37d89" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.877230 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.883621 4930 generic.go:334] "Generic (PLEG): container finished" podID="917fc1f6-cbc7-4609-bdb1-f942938384e6" containerID="5cc1bf97adcf98375330d1f214d7f21918443958fd3e279a528f0f410ac10916" exitCode=0 Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.883675 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"917fc1f6-cbc7-4609-bdb1-f942938384e6","Type":"ContainerDied","Data":"5cc1bf97adcf98375330d1f214d7f21918443958fd3e279a528f0f410ac10916"} Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.883700 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"917fc1f6-cbc7-4609-bdb1-f942938384e6","Type":"ContainerDied","Data":"55dddb790b074b34d4a2ef80753c415d3fda46a56273c4fd2150ec1ee4896f7e"} Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.883711 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55dddb790b074b34d4a2ef80753c415d3fda46a56273c4fd2150ec1ee4896f7e" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.885762 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.887584 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5758b132-d70a-4597-87b7-f172d1e8560a","Type":"ContainerStarted","Data":"aefad59fe5de686de0ba76ce1e036bfcba3792b0796c8529e9c059713962ecf3"} Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.887612 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5758b132-d70a-4597-87b7-f172d1e8560a","Type":"ContainerStarted","Data":"2cef38d3a223b8520efa022de42e4328595220ffc481122554688e45d3a10967"} Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.887625 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5758b132-d70a-4597-87b7-f172d1e8560a","Type":"ContainerStarted","Data":"90046eb7272be96fafb3e2b4ea90087e96c62d3a557560348ff1e39248094421"} Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.935927 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.945867 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.956814 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:19:42 crc kubenswrapper[4930]: E1124 12:19:42.957289 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917fc1f6-cbc7-4609-bdb1-f942938384e6" containerName="nova-api-api" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.957306 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="917fc1f6-cbc7-4609-bdb1-f942938384e6" containerName="nova-api-api" Nov 24 12:19:42 crc kubenswrapper[4930]: E1124 12:19:42.957350 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917fc1f6-cbc7-4609-bdb1-f942938384e6" 
containerName="nova-api-log" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.957358 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="917fc1f6-cbc7-4609-bdb1-f942938384e6" containerName="nova-api-log" Nov 24 12:19:42 crc kubenswrapper[4930]: E1124 12:19:42.957391 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42832f49-0332-47be-b2d3-072f00d69bb6" containerName="nova-scheduler-scheduler" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.957400 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="42832f49-0332-47be-b2d3-072f00d69bb6" containerName="nova-scheduler-scheduler" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.957617 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="917fc1f6-cbc7-4609-bdb1-f942938384e6" containerName="nova-api-api" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.957642 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="42832f49-0332-47be-b2d3-072f00d69bb6" containerName="nova-scheduler-scheduler" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.957664 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="917fc1f6-cbc7-4609-bdb1-f942938384e6" containerName="nova-api-log" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.958398 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.962466 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.962449831 podStartE2EDuration="2.962449831s" podCreationTimestamp="2025-11-24 12:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:19:42.941151798 +0000 UTC m=+1229.555479748" watchObservedRunningTime="2025-11-24 12:19:42.962449831 +0000 UTC m=+1229.576777781" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.964790 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 12:19:42 crc kubenswrapper[4930]: I1124 12:19:42.983507 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.046653 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29h4s\" (UniqueName: \"kubernetes.io/projected/917fc1f6-cbc7-4609-bdb1-f942938384e6-kube-api-access-29h4s\") pod \"917fc1f6-cbc7-4609-bdb1-f942938384e6\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.046841 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-internal-tls-certs\") pod \"917fc1f6-cbc7-4609-bdb1-f942938384e6\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.047012 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-combined-ca-bundle\") pod \"917fc1f6-cbc7-4609-bdb1-f942938384e6\" (UID: 
\"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.047636 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-config-data\") pod \"917fc1f6-cbc7-4609-bdb1-f942938384e6\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.048058 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-public-tls-certs\") pod \"917fc1f6-cbc7-4609-bdb1-f942938384e6\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.048250 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/917fc1f6-cbc7-4609-bdb1-f942938384e6-logs\") pod \"917fc1f6-cbc7-4609-bdb1-f942938384e6\" (UID: \"917fc1f6-cbc7-4609-bdb1-f942938384e6\") " Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.049379 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917fc1f6-cbc7-4609-bdb1-f942938384e6-logs" (OuterVolumeSpecName: "logs") pod "917fc1f6-cbc7-4609-bdb1-f942938384e6" (UID: "917fc1f6-cbc7-4609-bdb1-f942938384e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.066919 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/917fc1f6-cbc7-4609-bdb1-f942938384e6-kube-api-access-29h4s" (OuterVolumeSpecName: "kube-api-access-29h4s") pod "917fc1f6-cbc7-4609-bdb1-f942938384e6" (UID: "917fc1f6-cbc7-4609-bdb1-f942938384e6"). InnerVolumeSpecName "kube-api-access-29h4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.075178 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "917fc1f6-cbc7-4609-bdb1-f942938384e6" (UID: "917fc1f6-cbc7-4609-bdb1-f942938384e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.102905 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-config-data" (OuterVolumeSpecName: "config-data") pod "917fc1f6-cbc7-4609-bdb1-f942938384e6" (UID: "917fc1f6-cbc7-4609-bdb1-f942938384e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.104945 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "917fc1f6-cbc7-4609-bdb1-f942938384e6" (UID: "917fc1f6-cbc7-4609-bdb1-f942938384e6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.111161 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "917fc1f6-cbc7-4609-bdb1-f942938384e6" (UID: "917fc1f6-cbc7-4609-bdb1-f942938384e6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.151343 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec8562f-0cac-4105-9a8e-ba98bf34a944-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7ec8562f-0cac-4105-9a8e-ba98bf34a944\") " pod="openstack/nova-scheduler-0" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.151695 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vxhc\" (UniqueName: \"kubernetes.io/projected/7ec8562f-0cac-4105-9a8e-ba98bf34a944-kube-api-access-8vxhc\") pod \"nova-scheduler-0\" (UID: \"7ec8562f-0cac-4105-9a8e-ba98bf34a944\") " pod="openstack/nova-scheduler-0" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.151815 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec8562f-0cac-4105-9a8e-ba98bf34a944-config-data\") pod \"nova-scheduler-0\" (UID: \"7ec8562f-0cac-4105-9a8e-ba98bf34a944\") " pod="openstack/nova-scheduler-0" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.152174 4930 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.152202 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.152214 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-config-data\") on node 
\"crc\" DevicePath \"\"" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.152226 4930 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/917fc1f6-cbc7-4609-bdb1-f942938384e6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.152236 4930 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/917fc1f6-cbc7-4609-bdb1-f942938384e6-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.152249 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29h4s\" (UniqueName: \"kubernetes.io/projected/917fc1f6-cbc7-4609-bdb1-f942938384e6-kube-api-access-29h4s\") on node \"crc\" DevicePath \"\"" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.254672 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vxhc\" (UniqueName: \"kubernetes.io/projected/7ec8562f-0cac-4105-9a8e-ba98bf34a944-kube-api-access-8vxhc\") pod \"nova-scheduler-0\" (UID: \"7ec8562f-0cac-4105-9a8e-ba98bf34a944\") " pod="openstack/nova-scheduler-0" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.254733 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec8562f-0cac-4105-9a8e-ba98bf34a944-config-data\") pod \"nova-scheduler-0\" (UID: \"7ec8562f-0cac-4105-9a8e-ba98bf34a944\") " pod="openstack/nova-scheduler-0" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.255620 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec8562f-0cac-4105-9a8e-ba98bf34a944-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7ec8562f-0cac-4105-9a8e-ba98bf34a944\") " pod="openstack/nova-scheduler-0" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.259299 4930 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec8562f-0cac-4105-9a8e-ba98bf34a944-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7ec8562f-0cac-4105-9a8e-ba98bf34a944\") " pod="openstack/nova-scheduler-0" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.272911 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec8562f-0cac-4105-9a8e-ba98bf34a944-config-data\") pod \"nova-scheduler-0\" (UID: \"7ec8562f-0cac-4105-9a8e-ba98bf34a944\") " pod="openstack/nova-scheduler-0" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.275793 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vxhc\" (UniqueName: \"kubernetes.io/projected/7ec8562f-0cac-4105-9a8e-ba98bf34a944-kube-api-access-8vxhc\") pod \"nova-scheduler-0\" (UID: \"7ec8562f-0cac-4105-9a8e-ba98bf34a944\") " pod="openstack/nova-scheduler-0" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.278999 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.727917 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:19:43 crc kubenswrapper[4930]: W1124 12:19:43.732785 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ec8562f_0cac_4105_9a8e_ba98bf34a944.slice/crio-c691a2d416a0b8495acbe42e0e0a9e3bb2dcd1870c745aeb5444babc71b57ce2 WatchSource:0}: Error finding container c691a2d416a0b8495acbe42e0e0a9e3bb2dcd1870c745aeb5444babc71b57ce2: Status 404 returned error can't find the container with id c691a2d416a0b8495acbe42e0e0a9e3bb2dcd1870c745aeb5444babc71b57ce2 Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.900680 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.900738 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7ec8562f-0cac-4105-9a8e-ba98bf34a944","Type":"ContainerStarted","Data":"c691a2d416a0b8495acbe42e0e0a9e3bb2dcd1870c745aeb5444babc71b57ce2"} Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.939166 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.949919 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.963223 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.964981 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.967495 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.967886 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.968015 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 12:19:43 crc kubenswrapper[4930]: I1124 12:19:43.995514 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.069758 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc 
kubenswrapper[4930]: I1124 12:19:44.069826 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-config-data\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.070194 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.070334 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-logs\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.070420 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5g92\" (UniqueName: \"kubernetes.io/projected/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-kube-api-access-j5g92\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.070680 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.096052 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="42832f49-0332-47be-b2d3-072f00d69bb6" path="/var/lib/kubelet/pods/42832f49-0332-47be-b2d3-072f00d69bb6/volumes" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.096694 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="917fc1f6-cbc7-4609-bdb1-f942938384e6" path="/var/lib/kubelet/pods/917fc1f6-cbc7-4609-bdb1-f942938384e6/volumes" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.172424 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.172483 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-logs\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.172516 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5g92\" (UniqueName: \"kubernetes.io/projected/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-kube-api-access-j5g92\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.172597 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.172634 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.172659 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-config-data\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.173188 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-logs\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.178404 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.178400 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.178524 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-config-data\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.189764 4930 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.192616 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5g92\" (UniqueName: \"kubernetes.io/projected/ae96b7cf-94c8-4f24-bc63-3b0a529f09e5-kube-api-access-j5g92\") pod \"nova-api-0\" (UID: \"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5\") " pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.306078 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.753008 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:19:44 crc kubenswrapper[4930]: W1124 12:19:44.755111 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae96b7cf_94c8_4f24_bc63_3b0a529f09e5.slice/crio-93f15a9c9badddf8a6eacde6b5d237320d21f16c77c36a370e6b3d304f7af0d5 WatchSource:0}: Error finding container 93f15a9c9badddf8a6eacde6b5d237320d21f16c77c36a370e6b3d304f7af0d5: Status 404 returned error can't find the container with id 93f15a9c9badddf8a6eacde6b5d237320d21f16c77c36a370e6b3d304f7af0d5 Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.912054 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5","Type":"ContainerStarted","Data":"93f15a9c9badddf8a6eacde6b5d237320d21f16c77c36a370e6b3d304f7af0d5"} Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.913671 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"7ec8562f-0cac-4105-9a8e-ba98bf34a944","Type":"ContainerStarted","Data":"49ca583e448a9f3a0b1a0710853ec97146176c39d5d0deff52178ad70faf139c"} Nov 24 12:19:44 crc kubenswrapper[4930]: I1124 12:19:44.934759 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.934735416 podStartE2EDuration="2.934735416s" podCreationTimestamp="2025-11-24 12:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:19:44.933514851 +0000 UTC m=+1231.547842811" watchObservedRunningTime="2025-11-24 12:19:44.934735416 +0000 UTC m=+1231.549063366" Nov 24 12:19:45 crc kubenswrapper[4930]: I1124 12:19:45.928644 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5","Type":"ContainerStarted","Data":"f445402ce9a7f0f809ffa8d7ec6d5d6237d1b9447c1946e95833b5b49af7a10d"} Nov 24 12:19:45 crc kubenswrapper[4930]: I1124 12:19:45.928983 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae96b7cf-94c8-4f24-bc63-3b0a529f09e5","Type":"ContainerStarted","Data":"6f5f2ca35b8b08748605ee03b094be91176255e63c331afcf19cc9284181322e"} Nov 24 12:19:45 crc kubenswrapper[4930]: I1124 12:19:45.947358 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.947336326 podStartE2EDuration="2.947336326s" podCreationTimestamp="2025-11-24 12:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:19:45.945451731 +0000 UTC m=+1232.559779701" watchObservedRunningTime="2025-11-24 12:19:45.947336326 +0000 UTC m=+1232.561664276" Nov 24 12:19:46 crc kubenswrapper[4930]: I1124 12:19:46.333676 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Nov 24 12:19:46 crc kubenswrapper[4930]: I1124 12:19:46.334149 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:19:48 crc kubenswrapper[4930]: I1124 12:19:48.280168 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 12:19:51 crc kubenswrapper[4930]: I1124 12:19:51.333681 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 12:19:51 crc kubenswrapper[4930]: I1124 12:19:51.334270 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 12:19:52 crc kubenswrapper[4930]: I1124 12:19:52.348936 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5758b132-d70a-4597-87b7-f172d1e8560a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:19:52 crc kubenswrapper[4930]: I1124 12:19:52.348945 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5758b132-d70a-4597-87b7-f172d1e8560a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:19:53 crc kubenswrapper[4930]: I1124 12:19:53.279392 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 12:19:53 crc kubenswrapper[4930]: I1124 12:19:53.306808 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 12:19:54 crc kubenswrapper[4930]: I1124 12:19:54.030757 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 12:19:54 crc 
kubenswrapper[4930]: I1124 12:19:54.306647 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:19:54 crc kubenswrapper[4930]: I1124 12:19:54.306693 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:19:55 crc kubenswrapper[4930]: I1124 12:19:55.319830 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ae96b7cf-94c8-4f24-bc63-3b0a529f09e5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:19:55 crc kubenswrapper[4930]: I1124 12:19:55.320482 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ae96b7cf-94c8-4f24-bc63-3b0a529f09e5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:20:01 crc kubenswrapper[4930]: I1124 12:20:01.338899 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 12:20:01 crc kubenswrapper[4930]: I1124 12:20:01.340636 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 12:20:01 crc kubenswrapper[4930]: I1124 12:20:01.343148 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 12:20:01 crc kubenswrapper[4930]: I1124 12:20:01.809522 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:20:01 crc kubenswrapper[4930]: I1124 12:20:01.809735 4930 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:20:02 crc kubenswrapper[4930]: I1124 12:20:02.080223 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 12:20:03 crc kubenswrapper[4930]: I1124 12:20:03.374498 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 12:20:04 crc kubenswrapper[4930]: I1124 12:20:04.315151 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 12:20:04 crc kubenswrapper[4930]: I1124 12:20:04.315261 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 12:20:04 crc kubenswrapper[4930]: I1124 12:20:04.315752 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 12:20:04 crc kubenswrapper[4930]: I1124 12:20:04.315793 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 12:20:04 crc kubenswrapper[4930]: I1124 12:20:04.322630 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 12:20:04 crc kubenswrapper[4930]: I1124 12:20:04.327391 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 12:20:12 crc kubenswrapper[4930]: I1124 12:20:12.876855 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:20:13 crc kubenswrapper[4930]: I1124 12:20:13.631951 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:20:17 crc kubenswrapper[4930]: I1124 12:20:17.264018 4930 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="270a64e1-2837-47ac-860f-d616efdc6bbc" containerName="rabbitmq" containerID="cri-o://a835f07f63a3e2b2e883d951626ad566d7e97283aea6a0166eddff40548bcfdf" gracePeriod=604796 Nov 24 12:20:17 crc kubenswrapper[4930]: I1124 12:20:17.677450 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d35e6340-889e-4150-90c7-059417befffd" containerName="rabbitmq" containerID="cri-o://2070a163637df5ef2912b5ae6193d8c6a469ad7618d0121258a73de4773df98d" gracePeriod=604796 Nov 24 12:20:17 crc kubenswrapper[4930]: I1124 12:20:17.682955 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d35e6340-889e-4150-90c7-059417befffd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 24 12:20:18 crc kubenswrapper[4930]: I1124 12:20:18.396608 4930 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="270a64e1-2837-47ac-860f-d616efdc6bbc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Nov 24 12:20:23 crc kubenswrapper[4930]: I1124 12:20:23.979065 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.132683 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-plugins\") pod \"270a64e1-2837-47ac-860f-d616efdc6bbc\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.132723 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"270a64e1-2837-47ac-860f-d616efdc6bbc\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.132761 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-confd\") pod \"270a64e1-2837-47ac-860f-d616efdc6bbc\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.132811 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zsxm\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-kube-api-access-9zsxm\") pod \"270a64e1-2837-47ac-860f-d616efdc6bbc\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.132835 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-plugins-conf\") pod \"270a64e1-2837-47ac-860f-d616efdc6bbc\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.132887 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-config-data\") pod \"270a64e1-2837-47ac-860f-d616efdc6bbc\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.132944 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-tls\") pod \"270a64e1-2837-47ac-860f-d616efdc6bbc\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.133013 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-erlang-cookie\") pod \"270a64e1-2837-47ac-860f-d616efdc6bbc\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.133067 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/270a64e1-2837-47ac-860f-d616efdc6bbc-erlang-cookie-secret\") pod \"270a64e1-2837-47ac-860f-d616efdc6bbc\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.133092 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-server-conf\") pod \"270a64e1-2837-47ac-860f-d616efdc6bbc\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.133144 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/270a64e1-2837-47ac-860f-d616efdc6bbc-pod-info\") pod \"270a64e1-2837-47ac-860f-d616efdc6bbc\" (UID: \"270a64e1-2837-47ac-860f-d616efdc6bbc\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 
12:20:24.133164 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "270a64e1-2837-47ac-860f-d616efdc6bbc" (UID: "270a64e1-2837-47ac-860f-d616efdc6bbc"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.133613 4930 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.134028 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "270a64e1-2837-47ac-860f-d616efdc6bbc" (UID: "270a64e1-2837-47ac-860f-d616efdc6bbc"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.134137 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "270a64e1-2837-47ac-860f-d616efdc6bbc" (UID: "270a64e1-2837-47ac-860f-d616efdc6bbc"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.143234 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/270a64e1-2837-47ac-860f-d616efdc6bbc-pod-info" (OuterVolumeSpecName: "pod-info") pod "270a64e1-2837-47ac-860f-d616efdc6bbc" (UID: "270a64e1-2837-47ac-860f-d616efdc6bbc"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.143639 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "270a64e1-2837-47ac-860f-d616efdc6bbc" (UID: "270a64e1-2837-47ac-860f-d616efdc6bbc"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.146347 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270a64e1-2837-47ac-860f-d616efdc6bbc-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "270a64e1-2837-47ac-860f-d616efdc6bbc" (UID: "270a64e1-2837-47ac-860f-d616efdc6bbc"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.155731 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "270a64e1-2837-47ac-860f-d616efdc6bbc" (UID: "270a64e1-2837-47ac-860f-d616efdc6bbc"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.192050 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-kube-api-access-9zsxm" (OuterVolumeSpecName: "kube-api-access-9zsxm") pod "270a64e1-2837-47ac-860f-d616efdc6bbc" (UID: "270a64e1-2837-47ac-860f-d616efdc6bbc"). InnerVolumeSpecName "kube-api-access-9zsxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.203331 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-config-data" (OuterVolumeSpecName: "config-data") pod "270a64e1-2837-47ac-860f-d616efdc6bbc" (UID: "270a64e1-2837-47ac-860f-d616efdc6bbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.237592 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.237907 4930 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.237919 4930 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.237929 4930 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/270a64e1-2837-47ac-860f-d616efdc6bbc-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.237937 4930 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/270a64e1-2837-47ac-860f-d616efdc6bbc-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.237966 4930 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.237976 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zsxm\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-kube-api-access-9zsxm\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.237984 4930 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.250181 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-server-conf" (OuterVolumeSpecName: "server-conf") pod "270a64e1-2837-47ac-860f-d616efdc6bbc" (UID: "270a64e1-2837-47ac-860f-d616efdc6bbc"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.268736 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.278266 4930 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.317730 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "270a64e1-2837-47ac-860f-d616efdc6bbc" (UID: "270a64e1-2837-47ac-860f-d616efdc6bbc"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.329584 4930 generic.go:334] "Generic (PLEG): container finished" podID="d35e6340-889e-4150-90c7-059417befffd" containerID="2070a163637df5ef2912b5ae6193d8c6a469ad7618d0121258a73de4773df98d" exitCode=0 Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.329716 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.330505 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d35e6340-889e-4150-90c7-059417befffd","Type":"ContainerDied","Data":"2070a163637df5ef2912b5ae6193d8c6a469ad7618d0121258a73de4773df98d"} Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.330566 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d35e6340-889e-4150-90c7-059417befffd","Type":"ContainerDied","Data":"ac084088b65d3dff4cdccbae0f3337d5962164e31efd9fe0b91c54047cc39773"} Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.330589 4930 scope.go:117] "RemoveContainer" containerID="2070a163637df5ef2912b5ae6193d8c6a469ad7618d0121258a73de4773df98d" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.336935 4930 generic.go:334] "Generic (PLEG): container finished" podID="270a64e1-2837-47ac-860f-d616efdc6bbc" containerID="a835f07f63a3e2b2e883d951626ad566d7e97283aea6a0166eddff40548bcfdf" exitCode=0 Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.336973 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"270a64e1-2837-47ac-860f-d616efdc6bbc","Type":"ContainerDied","Data":"a835f07f63a3e2b2e883d951626ad566d7e97283aea6a0166eddff40548bcfdf"} Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.337020 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"270a64e1-2837-47ac-860f-d616efdc6bbc","Type":"ContainerDied","Data":"fcedf2cd3937e11e0a8aee329b99961336141df5df74c53e05396f9a0a658b44"} Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.337053 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.340915 4930 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.340951 4930 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/270a64e1-2837-47ac-860f-d616efdc6bbc-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.340966 4930 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/270a64e1-2837-47ac-860f-d616efdc6bbc-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.376922 4930 scope.go:117] "RemoveContainer" containerID="0881b547c4c2e31ed4e67f5a73adbf8448735700219670fe6dabd9d75186822a" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.394496 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.420967 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.425788 4930 scope.go:117] "RemoveContainer" containerID="2070a163637df5ef2912b5ae6193d8c6a469ad7618d0121258a73de4773df98d" Nov 24 12:20:24 crc kubenswrapper[4930]: E1124 12:20:24.429174 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2070a163637df5ef2912b5ae6193d8c6a469ad7618d0121258a73de4773df98d\": container with ID starting with 2070a163637df5ef2912b5ae6193d8c6a469ad7618d0121258a73de4773df98d not found: ID does not exist" containerID="2070a163637df5ef2912b5ae6193d8c6a469ad7618d0121258a73de4773df98d" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.429208 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2070a163637df5ef2912b5ae6193d8c6a469ad7618d0121258a73de4773df98d"} err="failed to get container status \"2070a163637df5ef2912b5ae6193d8c6a469ad7618d0121258a73de4773df98d\": rpc error: code = NotFound desc = could not find container \"2070a163637df5ef2912b5ae6193d8c6a469ad7618d0121258a73de4773df98d\": container with ID starting with 2070a163637df5ef2912b5ae6193d8c6a469ad7618d0121258a73de4773df98d not found: ID does not exist" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.429229 4930 scope.go:117] "RemoveContainer" containerID="0881b547c4c2e31ed4e67f5a73adbf8448735700219670fe6dabd9d75186822a" Nov 24 12:20:24 crc kubenswrapper[4930]: E1124 12:20:24.429634 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0881b547c4c2e31ed4e67f5a73adbf8448735700219670fe6dabd9d75186822a\": container with ID starting with 0881b547c4c2e31ed4e67f5a73adbf8448735700219670fe6dabd9d75186822a not found: ID does not exist" containerID="0881b547c4c2e31ed4e67f5a73adbf8448735700219670fe6dabd9d75186822a" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.429675 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0881b547c4c2e31ed4e67f5a73adbf8448735700219670fe6dabd9d75186822a"} err="failed to get container status \"0881b547c4c2e31ed4e67f5a73adbf8448735700219670fe6dabd9d75186822a\": rpc error: code = NotFound desc = could not find container \"0881b547c4c2e31ed4e67f5a73adbf8448735700219670fe6dabd9d75186822a\": container with ID 
starting with 0881b547c4c2e31ed4e67f5a73adbf8448735700219670fe6dabd9d75186822a not found: ID does not exist" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.429705 4930 scope.go:117] "RemoveContainer" containerID="a835f07f63a3e2b2e883d951626ad566d7e97283aea6a0166eddff40548bcfdf" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.441648 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkc4h\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-kube-api-access-bkc4h\") pod \"d35e6340-889e-4150-90c7-059417befffd\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.441684 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-plugins\") pod \"d35e6340-889e-4150-90c7-059417befffd\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.441713 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-config-data\") pod \"d35e6340-889e-4150-90c7-059417befffd\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.441733 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-tls\") pod \"d35e6340-889e-4150-90c7-059417befffd\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.441816 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-plugins-conf\") pod 
\"d35e6340-889e-4150-90c7-059417befffd\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.441862 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d35e6340-889e-4150-90c7-059417befffd-erlang-cookie-secret\") pod \"d35e6340-889e-4150-90c7-059417befffd\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.441917 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-confd\") pod \"d35e6340-889e-4150-90c7-059417befffd\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.441974 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d35e6340-889e-4150-90c7-059417befffd-pod-info\") pod \"d35e6340-889e-4150-90c7-059417befffd\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.441999 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-erlang-cookie\") pod \"d35e6340-889e-4150-90c7-059417befffd\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.442013 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-server-conf\") pod \"d35e6340-889e-4150-90c7-059417befffd\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.442063 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"d35e6340-889e-4150-90c7-059417befffd\" (UID: \"d35e6340-889e-4150-90c7-059417befffd\") " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.442105 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d35e6340-889e-4150-90c7-059417befffd" (UID: "d35e6340-889e-4150-90c7-059417befffd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.442482 4930 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.442810 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d35e6340-889e-4150-90c7-059417befffd" (UID: "d35e6340-889e-4150-90c7-059417befffd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.446461 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d35e6340-889e-4150-90c7-059417befffd" (UID: "d35e6340-889e-4150-90c7-059417befffd"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.448592 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:20:24 crc kubenswrapper[4930]: E1124 12:20:24.449016 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270a64e1-2837-47ac-860f-d616efdc6bbc" containerName="setup-container" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.449032 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="270a64e1-2837-47ac-860f-d616efdc6bbc" containerName="setup-container" Nov 24 12:20:24 crc kubenswrapper[4930]: E1124 12:20:24.449061 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270a64e1-2837-47ac-860f-d616efdc6bbc" containerName="rabbitmq" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.449068 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="270a64e1-2837-47ac-860f-d616efdc6bbc" containerName="rabbitmq" Nov 24 12:20:24 crc kubenswrapper[4930]: E1124 12:20:24.449092 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35e6340-889e-4150-90c7-059417befffd" containerName="rabbitmq" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.449098 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35e6340-889e-4150-90c7-059417befffd" containerName="rabbitmq" Nov 24 12:20:24 crc kubenswrapper[4930]: E1124 12:20:24.449114 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35e6340-889e-4150-90c7-059417befffd" containerName="setup-container" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.449121 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35e6340-889e-4150-90c7-059417befffd" containerName="setup-container" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.449297 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="270a64e1-2837-47ac-860f-d616efdc6bbc" containerName="rabbitmq" Nov 24 12:20:24 crc 
kubenswrapper[4930]: I1124 12:20:24.449307 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="d35e6340-889e-4150-90c7-059417befffd" containerName="rabbitmq" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.449860 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d35e6340-889e-4150-90c7-059417befffd-pod-info" (OuterVolumeSpecName: "pod-info") pod "d35e6340-889e-4150-90c7-059417befffd" (UID: "d35e6340-889e-4150-90c7-059417befffd"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.449980 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "d35e6340-889e-4150-90c7-059417befffd" (UID: "d35e6340-889e-4150-90c7-059417befffd"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.450326 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.452702 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d35e6340-889e-4150-90c7-059417befffd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d35e6340-889e-4150-90c7-059417befffd" (UID: "d35e6340-889e-4150-90c7-059417befffd"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.453866 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.454118 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.454275 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.454426 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.454711 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.456422 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d35e6340-889e-4150-90c7-059417befffd" (UID: "d35e6340-889e-4150-90c7-059417befffd"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.456630 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.460178 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-kube-api-access-bkc4h" (OuterVolumeSpecName: "kube-api-access-bkc4h") pod "d35e6340-889e-4150-90c7-059417befffd" (UID: "d35e6340-889e-4150-90c7-059417befffd"). InnerVolumeSpecName "kube-api-access-bkc4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.461816 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.463861 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xm22l" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.468426 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-config-data" (OuterVolumeSpecName: "config-data") pod "d35e6340-889e-4150-90c7-059417befffd" (UID: "d35e6340-889e-4150-90c7-059417befffd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.471500 4930 scope.go:117] "RemoveContainer" containerID="d60a7b8775c78678336cadec05e6522445fc860e50de6e16c9d5f9b3ff8c35c1" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.511724 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-server-conf" (OuterVolumeSpecName: "server-conf") pod "d35e6340-889e-4150-90c7-059417befffd" (UID: "d35e6340-889e-4150-90c7-059417befffd"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.544354 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5fe79a3-de03-466f-bf55-2d8c8259895a-config-data\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.544396 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.544430 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5fe79a3-de03-466f-bf55-2d8c8259895a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.544655 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5fe79a3-de03-466f-bf55-2d8c8259895a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.544754 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5fe79a3-de03-466f-bf55-2d8c8259895a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 
12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.544774 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5fe79a3-de03-466f-bf55-2d8c8259895a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.544803 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5fe79a3-de03-466f-bf55-2d8c8259895a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.544821 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5fe79a3-de03-466f-bf55-2d8c8259895a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.544867 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5fe79a3-de03-466f-bf55-2d8c8259895a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.544943 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d9vx\" (UniqueName: \"kubernetes.io/projected/a5fe79a3-de03-466f-bf55-2d8c8259895a-kube-api-access-4d9vx\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.545965 4930 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5fe79a3-de03-466f-bf55-2d8c8259895a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.546085 4930 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.546097 4930 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.546139 4930 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d35e6340-889e-4150-90c7-059417befffd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.546152 4930 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d35e6340-889e-4150-90c7-059417befffd-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.546160 4930 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.546171 4930 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.546191 4930 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.546219 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkc4h\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-kube-api-access-bkc4h\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.546229 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d35e6340-889e-4150-90c7-059417befffd-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.565256 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d35e6340-889e-4150-90c7-059417befffd" (UID: "d35e6340-889e-4150-90c7-059417befffd"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.568026 4930 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.655298 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5fe79a3-de03-466f-bf55-2d8c8259895a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.655384 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5fe79a3-de03-466f-bf55-2d8c8259895a-config-data\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.655423 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.655460 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5fe79a3-de03-466f-bf55-2d8c8259895a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.655557 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/a5fe79a3-de03-466f-bf55-2d8c8259895a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.655588 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5fe79a3-de03-466f-bf55-2d8c8259895a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.655611 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5fe79a3-de03-466f-bf55-2d8c8259895a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.655638 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5fe79a3-de03-466f-bf55-2d8c8259895a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.655660 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5fe79a3-de03-466f-bf55-2d8c8259895a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.655683 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5fe79a3-de03-466f-bf55-2d8c8259895a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " 
pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.655734 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d9vx\" (UniqueName: \"kubernetes.io/projected/a5fe79a3-de03-466f-bf55-2d8c8259895a-kube-api-access-4d9vx\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.655914 4930 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d35e6340-889e-4150-90c7-059417befffd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.655926 4930 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.660008 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.662003 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5fe79a3-de03-466f-bf55-2d8c8259895a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.665257 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5fe79a3-de03-466f-bf55-2d8c8259895a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.666450 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5fe79a3-de03-466f-bf55-2d8c8259895a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.666627 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5fe79a3-de03-466f-bf55-2d8c8259895a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.668143 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5fe79a3-de03-466f-bf55-2d8c8259895a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.669496 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5fe79a3-de03-466f-bf55-2d8c8259895a-config-data\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.671215 4930 scope.go:117] "RemoveContainer" containerID="a835f07f63a3e2b2e883d951626ad566d7e97283aea6a0166eddff40548bcfdf" Nov 24 12:20:24 crc kubenswrapper[4930]: E1124 12:20:24.672167 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a835f07f63a3e2b2e883d951626ad566d7e97283aea6a0166eddff40548bcfdf\": container with ID starting with 
a835f07f63a3e2b2e883d951626ad566d7e97283aea6a0166eddff40548bcfdf not found: ID does not exist" containerID="a835f07f63a3e2b2e883d951626ad566d7e97283aea6a0166eddff40548bcfdf" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.672336 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a835f07f63a3e2b2e883d951626ad566d7e97283aea6a0166eddff40548bcfdf"} err="failed to get container status \"a835f07f63a3e2b2e883d951626ad566d7e97283aea6a0166eddff40548bcfdf\": rpc error: code = NotFound desc = could not find container \"a835f07f63a3e2b2e883d951626ad566d7e97283aea6a0166eddff40548bcfdf\": container with ID starting with a835f07f63a3e2b2e883d951626ad566d7e97283aea6a0166eddff40548bcfdf not found: ID does not exist" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.672393 4930 scope.go:117] "RemoveContainer" containerID="d60a7b8775c78678336cadec05e6522445fc860e50de6e16c9d5f9b3ff8c35c1" Nov 24 12:20:24 crc kubenswrapper[4930]: E1124 12:20:24.674044 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60a7b8775c78678336cadec05e6522445fc860e50de6e16c9d5f9b3ff8c35c1\": container with ID starting with d60a7b8775c78678336cadec05e6522445fc860e50de6e16c9d5f9b3ff8c35c1 not found: ID does not exist" containerID="d60a7b8775c78678336cadec05e6522445fc860e50de6e16c9d5f9b3ff8c35c1" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.674111 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60a7b8775c78678336cadec05e6522445fc860e50de6e16c9d5f9b3ff8c35c1"} err="failed to get container status \"d60a7b8775c78678336cadec05e6522445fc860e50de6e16c9d5f9b3ff8c35c1\": rpc error: code = NotFound desc = could not find container \"d60a7b8775c78678336cadec05e6522445fc860e50de6e16c9d5f9b3ff8c35c1\": container with ID starting with d60a7b8775c78678336cadec05e6522445fc860e50de6e16c9d5f9b3ff8c35c1 not found: ID does not 
exist" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.676011 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5fe79a3-de03-466f-bf55-2d8c8259895a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.679704 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5fe79a3-de03-466f-bf55-2d8c8259895a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.681396 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5fe79a3-de03-466f-bf55-2d8c8259895a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.684395 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d9vx\" (UniqueName: \"kubernetes.io/projected/a5fe79a3-de03-466f-bf55-2d8c8259895a-kube-api-access-4d9vx\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.706037 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a5fe79a3-de03-466f-bf55-2d8c8259895a\") " pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.745595 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 
12:20:24.753713 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.769370 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.771659 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.775806 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.776032 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bbrsh" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.776201 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.776351 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.776498 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.777355 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.779905 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.791947 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.812948 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.860724 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.860802 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2247968a-aee9-4461-afd9-cfb36cc1f6fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.860849 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2247968a-aee9-4461-afd9-cfb36cc1f6fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.860868 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2247968a-aee9-4461-afd9-cfb36cc1f6fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.860883 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgxvf\" 
(UniqueName: \"kubernetes.io/projected/2247968a-aee9-4461-afd9-cfb36cc1f6fd-kube-api-access-bgxvf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.860910 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2247968a-aee9-4461-afd9-cfb36cc1f6fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.860940 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2247968a-aee9-4461-afd9-cfb36cc1f6fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.860959 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2247968a-aee9-4461-afd9-cfb36cc1f6fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.860985 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2247968a-aee9-4461-afd9-cfb36cc1f6fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.861002 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/2247968a-aee9-4461-afd9-cfb36cc1f6fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.861035 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2247968a-aee9-4461-afd9-cfb36cc1f6fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.962979 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2247968a-aee9-4461-afd9-cfb36cc1f6fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.964354 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2247968a-aee9-4461-afd9-cfb36cc1f6fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.964423 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2247968a-aee9-4461-afd9-cfb36cc1f6fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.964504 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.964581 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2247968a-aee9-4461-afd9-cfb36cc1f6fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.964660 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2247968a-aee9-4461-afd9-cfb36cc1f6fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.964693 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2247968a-aee9-4461-afd9-cfb36cc1f6fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.964723 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgxvf\" (UniqueName: \"kubernetes.io/projected/2247968a-aee9-4461-afd9-cfb36cc1f6fd-kube-api-access-bgxvf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.964777 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2247968a-aee9-4461-afd9-cfb36cc1f6fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.964837 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2247968a-aee9-4461-afd9-cfb36cc1f6fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.964863 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2247968a-aee9-4461-afd9-cfb36cc1f6fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.965768 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2247968a-aee9-4461-afd9-cfb36cc1f6fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.965949 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2247968a-aee9-4461-afd9-cfb36cc1f6fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.966223 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.966348 4930 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2247968a-aee9-4461-afd9-cfb36cc1f6fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.966526 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2247968a-aee9-4461-afd9-cfb36cc1f6fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.967979 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2247968a-aee9-4461-afd9-cfb36cc1f6fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.970412 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2247968a-aee9-4461-afd9-cfb36cc1f6fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.970432 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2247968a-aee9-4461-afd9-cfb36cc1f6fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.977229 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/2247968a-aee9-4461-afd9-cfb36cc1f6fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.977300 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2247968a-aee9-4461-afd9-cfb36cc1f6fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.991120 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgxvf\" (UniqueName: \"kubernetes.io/projected/2247968a-aee9-4461-afd9-cfb36cc1f6fd-kube-api-access-bgxvf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:24 crc kubenswrapper[4930]: I1124 12:20:24.994787 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2247968a-aee9-4461-afd9-cfb36cc1f6fd\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.100437 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.251219 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.348708 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5fe79a3-de03-466f-bf55-2d8c8259895a","Type":"ContainerStarted","Data":"357d59ac94921f426eeb6557cb906a4b6967c754705c711d06a339d10ae3bc47"} Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.554431 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:20:25 crc kubenswrapper[4930]: W1124 12:20:25.558937 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2247968a_aee9_4461_afd9_cfb36cc1f6fd.slice/crio-1368391f701b2173685245f3344d571e86f45325e9f4c940d710b9f277fefa53 WatchSource:0}: Error finding container 1368391f701b2173685245f3344d571e86f45325e9f4c940d710b9f277fefa53: Status 404 returned error can't find the container with id 1368391f701b2173685245f3344d571e86f45325e9f4c940d710b9f277fefa53 Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.885595 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-844899475f-8dxjm"] Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.888079 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.890193 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.899926 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-844899475f-8dxjm"] Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.983206 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn6wq\" (UniqueName: \"kubernetes.io/projected/0eed9e46-1c7d-437c-a01c-7b40e76e1140-kube-api-access-bn6wq\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.983282 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-dns-svc\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.983367 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-dns-swift-storage-0\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.983393 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-config\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " 
pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.983554 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-ovsdbserver-nb\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.983610 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-ovsdbserver-sb\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:25 crc kubenswrapper[4930]: I1124 12:20:25.983756 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-openstack-edpm-ipam\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.085815 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn6wq\" (UniqueName: \"kubernetes.io/projected/0eed9e46-1c7d-437c-a01c-7b40e76e1140-kube-api-access-bn6wq\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.085878 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-dns-svc\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: 
\"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.085929 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-dns-swift-storage-0\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.085955 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-config\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.085993 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-ovsdbserver-nb\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.086010 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-ovsdbserver-sb\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.086060 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-openstack-edpm-ipam\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " 
pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.086885 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-config\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.086906 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-dns-svc\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.087160 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-ovsdbserver-nb\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.087238 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-openstack-edpm-ipam\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.087489 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-dns-swift-storage-0\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 
12:20:26.087585 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-ovsdbserver-sb\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.095276 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="270a64e1-2837-47ac-860f-d616efdc6bbc" path="/var/lib/kubelet/pods/270a64e1-2837-47ac-860f-d616efdc6bbc/volumes" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.096434 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d35e6340-889e-4150-90c7-059417befffd" path="/var/lib/kubelet/pods/d35e6340-889e-4150-90c7-059417befffd/volumes" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.169145 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn6wq\" (UniqueName: \"kubernetes.io/projected/0eed9e46-1c7d-437c-a01c-7b40e76e1140-kube-api-access-bn6wq\") pod \"dnsmasq-dns-844899475f-8dxjm\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") " pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.204355 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.360697 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2247968a-aee9-4461-afd9-cfb36cc1f6fd","Type":"ContainerStarted","Data":"1368391f701b2173685245f3344d571e86f45325e9f4c940d710b9f277fefa53"} Nov 24 12:20:26 crc kubenswrapper[4930]: I1124 12:20:26.632214 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-844899475f-8dxjm"] Nov 24 12:20:26 crc kubenswrapper[4930]: W1124 12:20:26.632648 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eed9e46_1c7d_437c_a01c_7b40e76e1140.slice/crio-f2deb23f42acf056c3112d9df88372449b74822338a52e184b5344224c276d4e WatchSource:0}: Error finding container f2deb23f42acf056c3112d9df88372449b74822338a52e184b5344224c276d4e: Status 404 returned error can't find the container with id f2deb23f42acf056c3112d9df88372449b74822338a52e184b5344224c276d4e Nov 24 12:20:27 crc kubenswrapper[4930]: I1124 12:20:27.380843 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2247968a-aee9-4461-afd9-cfb36cc1f6fd","Type":"ContainerStarted","Data":"b7dfde5a0968d7044eb913b3be12be17216dbe29fe868aa9b3db674b6e24d318"} Nov 24 12:20:27 crc kubenswrapper[4930]: I1124 12:20:27.384562 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5fe79a3-de03-466f-bf55-2d8c8259895a","Type":"ContainerStarted","Data":"4f3ca9f8fec31371fb6d05485924446c04cc7c29f83fe645321588072c2b0d30"} Nov 24 12:20:27 crc kubenswrapper[4930]: I1124 12:20:27.387736 4930 generic.go:334] "Generic (PLEG): container finished" podID="0eed9e46-1c7d-437c-a01c-7b40e76e1140" containerID="568e3d5bf6b2a1b6ffd1ff8e8fd0cd7c43b950d8d31a8481e41e5c6a81f227b3" exitCode=0 Nov 24 12:20:27 crc kubenswrapper[4930]: 
I1124 12:20:27.387787 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844899475f-8dxjm" event={"ID":"0eed9e46-1c7d-437c-a01c-7b40e76e1140","Type":"ContainerDied","Data":"568e3d5bf6b2a1b6ffd1ff8e8fd0cd7c43b950d8d31a8481e41e5c6a81f227b3"} Nov 24 12:20:27 crc kubenswrapper[4930]: I1124 12:20:27.387815 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844899475f-8dxjm" event={"ID":"0eed9e46-1c7d-437c-a01c-7b40e76e1140","Type":"ContainerStarted","Data":"f2deb23f42acf056c3112d9df88372449b74822338a52e184b5344224c276d4e"} Nov 24 12:20:28 crc kubenswrapper[4930]: I1124 12:20:28.401736 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844899475f-8dxjm" event={"ID":"0eed9e46-1c7d-437c-a01c-7b40e76e1140","Type":"ContainerStarted","Data":"d2fcf86ec81eb77d0f5ed6d436281c6dafd698be53740ddc65428e899709576f"} Nov 24 12:20:28 crc kubenswrapper[4930]: I1124 12:20:28.402363 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-844899475f-8dxjm" Nov 24 12:20:28 crc kubenswrapper[4930]: I1124 12:20:28.428020 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-844899475f-8dxjm" podStartSLOduration=3.42799441 podStartE2EDuration="3.42799441s" podCreationTimestamp="2025-11-24 12:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:20:28.424368156 +0000 UTC m=+1275.038696116" watchObservedRunningTime="2025-11-24 12:20:28.42799441 +0000 UTC m=+1275.042322370" Nov 24 12:20:31 crc kubenswrapper[4930]: I1124 12:20:31.809484 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Nov 24 12:20:31 crc kubenswrapper[4930]: I1124 12:20:31.810149 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:20:31 crc kubenswrapper[4930]: I1124 12:20:31.810237 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw"
Nov 24 12:20:31 crc kubenswrapper[4930]: I1124 12:20:31.811041 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b35d5fa3eb364268da5b5e0253eae62e65a2c6dfd8d0e613fb3c92e7e1d100d"} pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 12:20:31 crc kubenswrapper[4930]: I1124 12:20:31.811107 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://9b35d5fa3eb364268da5b5e0253eae62e65a2c6dfd8d0e613fb3c92e7e1d100d" gracePeriod=600
Nov 24 12:20:32 crc kubenswrapper[4930]: I1124 12:20:32.455921 4930 generic.go:334] "Generic (PLEG): container finished" podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="9b35d5fa3eb364268da5b5e0253eae62e65a2c6dfd8d0e613fb3c92e7e1d100d" exitCode=0
Nov 24 12:20:32 crc kubenswrapper[4930]: I1124 12:20:32.456005 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"9b35d5fa3eb364268da5b5e0253eae62e65a2c6dfd8d0e613fb3c92e7e1d100d"}
Nov 24 12:20:32 crc kubenswrapper[4930]: I1124 12:20:32.456359 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"b1d415bebc3dcb6940325a2fd36f17aa3adbf53438534ff2d996acd866d5f23a"}
Nov 24 12:20:32 crc kubenswrapper[4930]: I1124 12:20:32.456388 4930 scope.go:117] "RemoveContainer" containerID="df660b89ae8561454b3d98787dfb50644dbca73ff06ad5c87819e47a0f113710"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.207025 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-844899475f-8dxjm"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.280884 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-bvhzv"]
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.281755 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" podUID="176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" containerName="dnsmasq-dns" containerID="cri-o://18557a516cf706707ebb3663f6fe5ce1795bc84006acb0a8af4940cf2e9d71b8" gracePeriod=10
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.463164 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64858ddbd7-fd6z9"]
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.464737 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.488083 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-config\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.488163 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdl5m\" (UniqueName: \"kubernetes.io/projected/9773394a-0a7d-40f6-a556-d3feb5acaf9d-kube-api-access-pdl5m\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.488203 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-openstack-edpm-ipam\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.488455 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-dns-swift-storage-0\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.488481 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-ovsdbserver-sb\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.488520 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-ovsdbserver-nb\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.488616 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-dns-svc\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.501654 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64858ddbd7-fd6z9"]
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.559064 4930 generic.go:334] "Generic (PLEG): container finished" podID="176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" containerID="18557a516cf706707ebb3663f6fe5ce1795bc84006acb0a8af4940cf2e9d71b8" exitCode=0
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.559129 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" event={"ID":"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac","Type":"ContainerDied","Data":"18557a516cf706707ebb3663f6fe5ce1795bc84006acb0a8af4940cf2e9d71b8"}
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.590516 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-config\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.590903 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdl5m\" (UniqueName: \"kubernetes.io/projected/9773394a-0a7d-40f6-a556-d3feb5acaf9d-kube-api-access-pdl5m\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.590937 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-openstack-edpm-ipam\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.590979 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-dns-swift-storage-0\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.591001 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-ovsdbserver-sb\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.591030 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-ovsdbserver-nb\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.591107 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-dns-svc\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.591473 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-config\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.592049 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-dns-swift-storage-0\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.592158 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-dns-svc\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.592751 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-ovsdbserver-sb\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.594227 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-ovsdbserver-nb\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.597100 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9773394a-0a7d-40f6-a556-d3feb5acaf9d-openstack-edpm-ipam\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.650171 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdl5m\" (UniqueName: \"kubernetes.io/projected/9773394a-0a7d-40f6-a556-d3feb5acaf9d-kube-api-access-pdl5m\") pod \"dnsmasq-dns-64858ddbd7-fd6z9\" (UID: \"9773394a-0a7d-40f6-a556-d3feb5acaf9d\") " pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.807914 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:36 crc kubenswrapper[4930]: I1124 12:20:36.925044 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv"
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.000830 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc26l\" (UniqueName: \"kubernetes.io/projected/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-kube-api-access-jc26l\") pod \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") "
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.000936 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-ovsdbserver-nb\") pod \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") "
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.001115 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-ovsdbserver-sb\") pod \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") "
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.001150 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-dns-svc\") pod \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") "
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.001243 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-config\") pod \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") "
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.001267 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-dns-swift-storage-0\") pod \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\" (UID: \"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac\") "
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.013620 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-kube-api-access-jc26l" (OuterVolumeSpecName: "kube-api-access-jc26l") pod "176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" (UID: "176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac"). InnerVolumeSpecName "kube-api-access-jc26l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.049663 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" (UID: "176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.056742 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" (UID: "176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.059192 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" (UID: "176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.063783 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-config" (OuterVolumeSpecName: "config") pod "176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" (UID: "176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.086099 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" (UID: "176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.104171 4930 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.104213 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc26l\" (UniqueName: \"kubernetes.io/projected/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-kube-api-access-jc26l\") on node \"crc\" DevicePath \"\""
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.104229 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.104241 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.104254 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.104265 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac-config\") on node \"crc\" DevicePath \"\""
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.239612 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64858ddbd7-fd6z9"]
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.569978 4930 generic.go:334] "Generic (PLEG): container finished" podID="9773394a-0a7d-40f6-a556-d3feb5acaf9d" containerID="198beaf9006bd9288ccc8bebfe7fe3864382199f16174a2f25f19035b0a7136d" exitCode=0
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.570060 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9" event={"ID":"9773394a-0a7d-40f6-a556-d3feb5acaf9d","Type":"ContainerDied","Data":"198beaf9006bd9288ccc8bebfe7fe3864382199f16174a2f25f19035b0a7136d"}
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.570090 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9" event={"ID":"9773394a-0a7d-40f6-a556-d3feb5acaf9d","Type":"ContainerStarted","Data":"b1008da33f1c7d996d987e886065ea698ad78e6e7444ce956ed5feff6a6a1543"}
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.573239 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv" event={"ID":"176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac","Type":"ContainerDied","Data":"a4b840347e97b8f6e2dedab6cce05b641d0307066a226298ae333bb9368f3c20"}
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.573282 4930 scope.go:117] "RemoveContainer" containerID="18557a516cf706707ebb3663f6fe5ce1795bc84006acb0a8af4940cf2e9d71b8"
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.573300 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-bvhzv"
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.716202 4930 scope.go:117] "RemoveContainer" containerID="7df6d2b92db9207da316dbe87e6ae0f67d35d54f6b2e9b032f8097c5b9c896e7"
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.750317 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-bvhzv"]
Nov 24 12:20:37 crc kubenswrapper[4930]: I1124 12:20:37.757886 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-bvhzv"]
Nov 24 12:20:38 crc kubenswrapper[4930]: I1124 12:20:38.095302 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" path="/var/lib/kubelet/pods/176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac/volumes"
Nov 24 12:20:38 crc kubenswrapper[4930]: I1124 12:20:38.584334 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9" event={"ID":"9773394a-0a7d-40f6-a556-d3feb5acaf9d","Type":"ContainerStarted","Data":"5b66aa62b8fffc3c3cb549bec89bc2aa583368664271f5a14c3164a6294b1fc0"}
Nov 24 12:20:38 crc kubenswrapper[4930]: I1124 12:20:38.584837 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:38 crc kubenswrapper[4930]: I1124 12:20:38.603688 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9" podStartSLOduration=2.60367168 podStartE2EDuration="2.60367168s" podCreationTimestamp="2025-11-24 12:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:20:38.600593882 +0000 UTC m=+1285.214921832" watchObservedRunningTime="2025-11-24 12:20:38.60367168 +0000 UTC m=+1285.217999630"
Nov 24 12:20:46 crc kubenswrapper[4930]: I1124 12:20:46.809856 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64858ddbd7-fd6z9"
Nov 24 12:20:46 crc kubenswrapper[4930]: I1124 12:20:46.894557 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-844899475f-8dxjm"]
Nov 24 12:20:46 crc kubenswrapper[4930]: I1124 12:20:46.895159 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-844899475f-8dxjm" podUID="0eed9e46-1c7d-437c-a01c-7b40e76e1140" containerName="dnsmasq-dns" containerID="cri-o://d2fcf86ec81eb77d0f5ed6d436281c6dafd698be53740ddc65428e899709576f" gracePeriod=10
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.429842 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844899475f-8dxjm"
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.599474 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-openstack-edpm-ipam\") pod \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") "
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.599527 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-dns-svc\") pod \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") "
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.599636 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-config\") pod \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") "
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.599657 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-ovsdbserver-sb\") pod \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") "
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.599756 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn6wq\" (UniqueName: \"kubernetes.io/projected/0eed9e46-1c7d-437c-a01c-7b40e76e1140-kube-api-access-bn6wq\") pod \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") "
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.599778 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-ovsdbserver-nb\") pod \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") "
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.599833 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-dns-swift-storage-0\") pod \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\" (UID: \"0eed9e46-1c7d-437c-a01c-7b40e76e1140\") "
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.608097 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eed9e46-1c7d-437c-a01c-7b40e76e1140-kube-api-access-bn6wq" (OuterVolumeSpecName: "kube-api-access-bn6wq") pod "0eed9e46-1c7d-437c-a01c-7b40e76e1140" (UID: "0eed9e46-1c7d-437c-a01c-7b40e76e1140"). InnerVolumeSpecName "kube-api-access-bn6wq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.657729 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-config" (OuterVolumeSpecName: "config") pod "0eed9e46-1c7d-437c-a01c-7b40e76e1140" (UID: "0eed9e46-1c7d-437c-a01c-7b40e76e1140"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.658962 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0eed9e46-1c7d-437c-a01c-7b40e76e1140" (UID: "0eed9e46-1c7d-437c-a01c-7b40e76e1140"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.660258 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0eed9e46-1c7d-437c-a01c-7b40e76e1140" (UID: "0eed9e46-1c7d-437c-a01c-7b40e76e1140"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.661218 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0eed9e46-1c7d-437c-a01c-7b40e76e1140" (UID: "0eed9e46-1c7d-437c-a01c-7b40e76e1140"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.666373 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0eed9e46-1c7d-437c-a01c-7b40e76e1140" (UID: "0eed9e46-1c7d-437c-a01c-7b40e76e1140"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.679519 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "0eed9e46-1c7d-437c-a01c-7b40e76e1140" (UID: "0eed9e46-1c7d-437c-a01c-7b40e76e1140"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.684477 4930 generic.go:334] "Generic (PLEG): container finished" podID="0eed9e46-1c7d-437c-a01c-7b40e76e1140" containerID="d2fcf86ec81eb77d0f5ed6d436281c6dafd698be53740ddc65428e899709576f" exitCode=0
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.684521 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844899475f-8dxjm" event={"ID":"0eed9e46-1c7d-437c-a01c-7b40e76e1140","Type":"ContainerDied","Data":"d2fcf86ec81eb77d0f5ed6d436281c6dafd698be53740ddc65428e899709576f"}
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.684579 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844899475f-8dxjm" event={"ID":"0eed9e46-1c7d-437c-a01c-7b40e76e1140","Type":"ContainerDied","Data":"f2deb23f42acf056c3112d9df88372449b74822338a52e184b5344224c276d4e"}
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.684597 4930 scope.go:117] "RemoveContainer" containerID="d2fcf86ec81eb77d0f5ed6d436281c6dafd698be53740ddc65428e899709576f"
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.684726 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844899475f-8dxjm"
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.703012 4930 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.703266 4930 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.703279 4930 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.703288 4930 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-config\") on node \"crc\" DevicePath \"\""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.703297 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.703306 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn6wq\" (UniqueName: \"kubernetes.io/projected/0eed9e46-1c7d-437c-a01c-7b40e76e1140-kube-api-access-bn6wq\") on node \"crc\" DevicePath \"\""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.703316 4930 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eed9e46-1c7d-437c-a01c-7b40e76e1140-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.760302 4930 scope.go:117] "RemoveContainer" containerID="568e3d5bf6b2a1b6ffd1ff8e8fd0cd7c43b950d8d31a8481e41e5c6a81f227b3"
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.767955 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-844899475f-8dxjm"]
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.776667 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-844899475f-8dxjm"]
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.790055 4930 scope.go:117] "RemoveContainer" containerID="d2fcf86ec81eb77d0f5ed6d436281c6dafd698be53740ddc65428e899709576f"
Nov 24 12:20:47 crc kubenswrapper[4930]: E1124 12:20:47.790496 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2fcf86ec81eb77d0f5ed6d436281c6dafd698be53740ddc65428e899709576f\": container with ID starting with d2fcf86ec81eb77d0f5ed6d436281c6dafd698be53740ddc65428e899709576f not found: ID does not exist" containerID="d2fcf86ec81eb77d0f5ed6d436281c6dafd698be53740ddc65428e899709576f"
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.790522 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2fcf86ec81eb77d0f5ed6d436281c6dafd698be53740ddc65428e899709576f"} err="failed to get container status \"d2fcf86ec81eb77d0f5ed6d436281c6dafd698be53740ddc65428e899709576f\": rpc error: code = NotFound desc = could not find container \"d2fcf86ec81eb77d0f5ed6d436281c6dafd698be53740ddc65428e899709576f\": container with ID starting with d2fcf86ec81eb77d0f5ed6d436281c6dafd698be53740ddc65428e899709576f not found: ID does not exist"
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.790562 4930 scope.go:117] "RemoveContainer" containerID="568e3d5bf6b2a1b6ffd1ff8e8fd0cd7c43b950d8d31a8481e41e5c6a81f227b3"
Nov 24 12:20:47 crc kubenswrapper[4930]: E1124 12:20:47.791002 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"568e3d5bf6b2a1b6ffd1ff8e8fd0cd7c43b950d8d31a8481e41e5c6a81f227b3\": container with ID starting with 568e3d5bf6b2a1b6ffd1ff8e8fd0cd7c43b950d8d31a8481e41e5c6a81f227b3 not found: ID does not exist" containerID="568e3d5bf6b2a1b6ffd1ff8e8fd0cd7c43b950d8d31a8481e41e5c6a81f227b3"
Nov 24 12:20:47 crc kubenswrapper[4930]: I1124 12:20:47.791025 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"568e3d5bf6b2a1b6ffd1ff8e8fd0cd7c43b950d8d31a8481e41e5c6a81f227b3"} err="failed to get container status \"568e3d5bf6b2a1b6ffd1ff8e8fd0cd7c43b950d8d31a8481e41e5c6a81f227b3\": rpc error: code = NotFound desc = could not find container \"568e3d5bf6b2a1b6ffd1ff8e8fd0cd7c43b950d8d31a8481e41e5c6a81f227b3\": container with ID starting with 568e3d5bf6b2a1b6ffd1ff8e8fd0cd7c43b950d8d31a8481e41e5c6a81f227b3 not found: ID does not exist"
Nov 24 12:20:48 crc kubenswrapper[4930]: I1124 12:20:48.096286 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eed9e46-1c7d-437c-a01c-7b40e76e1140" path="/var/lib/kubelet/pods/0eed9e46-1c7d-437c-a01c-7b40e76e1140/volumes"
Nov 24 12:20:59 crc kubenswrapper[4930]: I1124 12:20:59.821854 4930 generic.go:334] "Generic (PLEG): container finished" podID="2247968a-aee9-4461-afd9-cfb36cc1f6fd" containerID="b7dfde5a0968d7044eb913b3be12be17216dbe29fe868aa9b3db674b6e24d318" exitCode=0
Nov 24 12:20:59 crc kubenswrapper[4930]: I1124 12:20:59.821939 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2247968a-aee9-4461-afd9-cfb36cc1f6fd","Type":"ContainerDied","Data":"b7dfde5a0968d7044eb913b3be12be17216dbe29fe868aa9b3db674b6e24d318"}
Nov 24 12:20:59 crc kubenswrapper[4930]: I1124 12:20:59.824730 4930 generic.go:334] "Generic (PLEG): container finished" podID="a5fe79a3-de03-466f-bf55-2d8c8259895a" containerID="4f3ca9f8fec31371fb6d05485924446c04cc7c29f83fe645321588072c2b0d30" exitCode=0
Nov 24 12:20:59 crc kubenswrapper[4930]: I1124 12:20:59.824765 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5fe79a3-de03-466f-bf55-2d8c8259895a","Type":"ContainerDied","Data":"4f3ca9f8fec31371fb6d05485924446c04cc7c29f83fe645321588072c2b0d30"}
Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.229461 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s"]
Nov 24 12:21:00 crc kubenswrapper[4930]: E1124 12:21:00.230250 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" containerName="init"
Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.230266 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" containerName="init"
Nov 24 12:21:00 crc kubenswrapper[4930]: E1124 12:21:00.230287 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eed9e46-1c7d-437c-a01c-7b40e76e1140" containerName="init"
Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.230293 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eed9e46-1c7d-437c-a01c-7b40e76e1140" containerName="init"
Nov 24 12:21:00 crc kubenswrapper[4930]: E1124 12:21:00.230309 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" containerName="dnsmasq-dns"
Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.230315 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" containerName="dnsmasq-dns"
Nov 24 12:21:00 crc kubenswrapper[4930]: E1124 12:21:00.230334 4930 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="0eed9e46-1c7d-437c-a01c-7b40e76e1140" containerName="dnsmasq-dns" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.230340 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eed9e46-1c7d-437c-a01c-7b40e76e1140" containerName="dnsmasq-dns" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.230517 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eed9e46-1c7d-437c-a01c-7b40e76e1140" containerName="dnsmasq-dns" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.230548 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="176fbd8b-99bc-4ad1-b85d-4db1c2fb3eac" containerName="dnsmasq-dns" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.231323 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.235182 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.235883 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.236127 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.237834 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.239891 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s"] Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.332115 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.332212 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.332277 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz8l7\" (UniqueName: \"kubernetes.io/projected/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-kube-api-access-bz8l7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.332302 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.434407 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz8l7\" (UniqueName: \"kubernetes.io/projected/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-kube-api-access-bz8l7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s\" (UID: 
\"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.434464 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.434509 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.434591 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.439284 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.439553 4930 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.449498 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.450260 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz8l7\" (UniqueName: \"kubernetes.io/projected/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-kube-api-access-bz8l7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.546305 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.842100 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5fe79a3-de03-466f-bf55-2d8c8259895a","Type":"ContainerStarted","Data":"290b5480c6e9c816c6195433c1ba5eabb0c7f04947f5cdc2354f9433c9cf0b1b"} Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.843284 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.844911 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2247968a-aee9-4461-afd9-cfb36cc1f6fd","Type":"ContainerStarted","Data":"69eb2164ad56e1553a8aadf77cb168d1cb518fea5d2782b04eee286c9ea52ec0"} Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.845385 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.865300 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.865284976 podStartE2EDuration="36.865284976s" podCreationTimestamp="2025-11-24 12:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:21:00.863895116 +0000 UTC m=+1307.478223076" watchObservedRunningTime="2025-11-24 12:21:00.865284976 +0000 UTC m=+1307.479612916" Nov 24 12:21:00 crc kubenswrapper[4930]: I1124 12:21:00.901574 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.901551983 podStartE2EDuration="36.901551983s" podCreationTimestamp="2025-11-24 12:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-24 12:21:00.885711195 +0000 UTC m=+1307.500039165" watchObservedRunningTime="2025-11-24 12:21:00.901551983 +0000 UTC m=+1307.515879943" Nov 24 12:21:01 crc kubenswrapper[4930]: I1124 12:21:01.115192 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s"] Nov 24 12:21:01 crc kubenswrapper[4930]: I1124 12:21:01.124829 4930 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:21:01 crc kubenswrapper[4930]: I1124 12:21:01.855454 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" event={"ID":"29211cc5-c7d0-4aa9-9456-3313e20d2e1d","Type":"ContainerStarted","Data":"5ed872116fa76ea8ebe52d7f505a1ef7c2646c7472fe2953fc65a56820978110"} Nov 24 12:21:10 crc kubenswrapper[4930]: I1124 12:21:10.967614 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" event={"ID":"29211cc5-c7d0-4aa9-9456-3313e20d2e1d","Type":"ContainerStarted","Data":"62c843a80c14c39d660ac6814bd97605b922f7eacb858d458e19c557306ed3ce"} Nov 24 12:21:10 crc kubenswrapper[4930]: I1124 12:21:10.993456 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" podStartSLOduration=2.067074693 podStartE2EDuration="10.993439743s" podCreationTimestamp="2025-11-24 12:21:00 +0000 UTC" firstStartedPulling="2025-11-24 12:21:01.124364026 +0000 UTC m=+1307.738691976" lastFinishedPulling="2025-11-24 12:21:10.050729076 +0000 UTC m=+1316.665057026" observedRunningTime="2025-11-24 12:21:10.987882013 +0000 UTC m=+1317.602209973" watchObservedRunningTime="2025-11-24 12:21:10.993439743 +0000 UTC m=+1317.607767693" Nov 24 12:21:14 crc kubenswrapper[4930]: I1124 12:21:14.795072 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-server-0" Nov 24 12:21:15 crc kubenswrapper[4930]: I1124 12:21:15.103718 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:21:21 crc kubenswrapper[4930]: I1124 12:21:21.082923 4930 generic.go:334] "Generic (PLEG): container finished" podID="29211cc5-c7d0-4aa9-9456-3313e20d2e1d" containerID="62c843a80c14c39d660ac6814bd97605b922f7eacb858d458e19c557306ed3ce" exitCode=0 Nov 24 12:21:21 crc kubenswrapper[4930]: I1124 12:21:21.083015 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" event={"ID":"29211cc5-c7d0-4aa9-9456-3313e20d2e1d","Type":"ContainerDied","Data":"62c843a80c14c39d660ac6814bd97605b922f7eacb858d458e19c557306ed3ce"} Nov 24 12:21:22 crc kubenswrapper[4930]: I1124 12:21:22.490784 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:22 crc kubenswrapper[4930]: I1124 12:21:22.491892 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-repo-setup-combined-ca-bundle\") pod \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " Nov 24 12:21:22 crc kubenswrapper[4930]: I1124 12:21:22.492035 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz8l7\" (UniqueName: \"kubernetes.io/projected/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-kube-api-access-bz8l7\") pod \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " Nov 24 12:21:22 crc kubenswrapper[4930]: I1124 12:21:22.492063 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-inventory\") pod \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " Nov 24 12:21:22 crc kubenswrapper[4930]: I1124 12:21:22.492105 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-ssh-key\") pod \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\" (UID: \"29211cc5-c7d0-4aa9-9456-3313e20d2e1d\") " Nov 24 12:21:22 crc kubenswrapper[4930]: I1124 12:21:22.498652 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "29211cc5-c7d0-4aa9-9456-3313e20d2e1d" (UID: "29211cc5-c7d0-4aa9-9456-3313e20d2e1d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:21:22 crc kubenswrapper[4930]: I1124 12:21:22.498669 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-kube-api-access-bz8l7" (OuterVolumeSpecName: "kube-api-access-bz8l7") pod "29211cc5-c7d0-4aa9-9456-3313e20d2e1d" (UID: "29211cc5-c7d0-4aa9-9456-3313e20d2e1d"). InnerVolumeSpecName "kube-api-access-bz8l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:21:22 crc kubenswrapper[4930]: I1124 12:21:22.560652 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "29211cc5-c7d0-4aa9-9456-3313e20d2e1d" (UID: "29211cc5-c7d0-4aa9-9456-3313e20d2e1d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:21:22 crc kubenswrapper[4930]: I1124 12:21:22.564404 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-inventory" (OuterVolumeSpecName: "inventory") pod "29211cc5-c7d0-4aa9-9456-3313e20d2e1d" (UID: "29211cc5-c7d0-4aa9-9456-3313e20d2e1d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:21:22 crc kubenswrapper[4930]: I1124 12:21:22.598204 4930 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:21:22 crc kubenswrapper[4930]: I1124 12:21:22.598255 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz8l7\" (UniqueName: \"kubernetes.io/projected/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-kube-api-access-bz8l7\") on node \"crc\" DevicePath \"\"" Nov 24 12:21:22 crc kubenswrapper[4930]: I1124 12:21:22.598273 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:21:22 crc kubenswrapper[4930]: I1124 12:21:22.598291 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29211cc5-c7d0-4aa9-9456-3313e20d2e1d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.109878 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" event={"ID":"29211cc5-c7d0-4aa9-9456-3313e20d2e1d","Type":"ContainerDied","Data":"5ed872116fa76ea8ebe52d7f505a1ef7c2646c7472fe2953fc65a56820978110"} Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.109927 4930 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.109930 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ed872116fa76ea8ebe52d7f505a1ef7c2646c7472fe2953fc65a56820978110" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.177707 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j"] Nov 24 12:21:23 crc kubenswrapper[4930]: E1124 12:21:23.178596 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29211cc5-c7d0-4aa9-9456-3313e20d2e1d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.178664 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="29211cc5-c7d0-4aa9-9456-3313e20d2e1d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.179016 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="29211cc5-c7d0-4aa9-9456-3313e20d2e1d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.180267 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.182522 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.182596 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.182645 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.182827 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.190756 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j"] Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.310947 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhk9k\" (UniqueName: \"kubernetes.io/projected/7b4b0309-31fd-407f-a03f-df928fd4675b-kube-api-access-vhk9k\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcq7j\" (UID: \"7b4b0309-31fd-407f-a03f-df928fd4675b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.311323 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b4b0309-31fd-407f-a03f-df928fd4675b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcq7j\" (UID: \"7b4b0309-31fd-407f-a03f-df928fd4675b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.311582 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b4b0309-31fd-407f-a03f-df928fd4675b-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcq7j\" (UID: \"7b4b0309-31fd-407f-a03f-df928fd4675b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.414135 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhk9k\" (UniqueName: \"kubernetes.io/projected/7b4b0309-31fd-407f-a03f-df928fd4675b-kube-api-access-vhk9k\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcq7j\" (UID: \"7b4b0309-31fd-407f-a03f-df928fd4675b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.414209 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b4b0309-31fd-407f-a03f-df928fd4675b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcq7j\" (UID: \"7b4b0309-31fd-407f-a03f-df928fd4675b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.414305 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b4b0309-31fd-407f-a03f-df928fd4675b-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcq7j\" (UID: \"7b4b0309-31fd-407f-a03f-df928fd4675b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.424636 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b4b0309-31fd-407f-a03f-df928fd4675b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcq7j\" (UID: \"7b4b0309-31fd-407f-a03f-df928fd4675b\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.424690 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b4b0309-31fd-407f-a03f-df928fd4675b-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcq7j\" (UID: \"7b4b0309-31fd-407f-a03f-df928fd4675b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.452299 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhk9k\" (UniqueName: \"kubernetes.io/projected/7b4b0309-31fd-407f-a03f-df928fd4675b-kube-api-access-vhk9k\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcq7j\" (UID: \"7b4b0309-31fd-407f-a03f-df928fd4675b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" Nov 24 12:21:23 crc kubenswrapper[4930]: I1124 12:21:23.500269 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" Nov 24 12:21:24 crc kubenswrapper[4930]: I1124 12:21:24.041701 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j"] Nov 24 12:21:24 crc kubenswrapper[4930]: W1124 12:21:24.043282 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b4b0309_31fd_407f_a03f_df928fd4675b.slice/crio-edd62e2637f9a31463ec4d253519a77b70ed3764845d7ccf04002bbf93b1ae47 WatchSource:0}: Error finding container edd62e2637f9a31463ec4d253519a77b70ed3764845d7ccf04002bbf93b1ae47: Status 404 returned error can't find the container with id edd62e2637f9a31463ec4d253519a77b70ed3764845d7ccf04002bbf93b1ae47 Nov 24 12:21:24 crc kubenswrapper[4930]: I1124 12:21:24.123618 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" event={"ID":"7b4b0309-31fd-407f-a03f-df928fd4675b","Type":"ContainerStarted","Data":"edd62e2637f9a31463ec4d253519a77b70ed3764845d7ccf04002bbf93b1ae47"} Nov 24 12:21:25 crc kubenswrapper[4930]: I1124 12:21:25.133451 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" event={"ID":"7b4b0309-31fd-407f-a03f-df928fd4675b","Type":"ContainerStarted","Data":"210ff7bac6cfa12f7290638e35af3aaca00b9a8c7b4834e7eb6026c9b226d464"} Nov 24 12:21:25 crc kubenswrapper[4930]: I1124 12:21:25.161357 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" podStartSLOduration=1.380570268 podStartE2EDuration="2.161335439s" podCreationTimestamp="2025-11-24 12:21:23 +0000 UTC" firstStartedPulling="2025-11-24 12:21:24.053954528 +0000 UTC m=+1330.668282478" lastFinishedPulling="2025-11-24 12:21:24.834719699 +0000 UTC m=+1331.449047649" observedRunningTime="2025-11-24 
12:21:25.155672345 +0000 UTC m=+1331.770000305" watchObservedRunningTime="2025-11-24 12:21:25.161335439 +0000 UTC m=+1331.775663389" Nov 24 12:21:28 crc kubenswrapper[4930]: I1124 12:21:28.166754 4930 generic.go:334] "Generic (PLEG): container finished" podID="7b4b0309-31fd-407f-a03f-df928fd4675b" containerID="210ff7bac6cfa12f7290638e35af3aaca00b9a8c7b4834e7eb6026c9b226d464" exitCode=0 Nov 24 12:21:28 crc kubenswrapper[4930]: I1124 12:21:28.166854 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" event={"ID":"7b4b0309-31fd-407f-a03f-df928fd4675b","Type":"ContainerDied","Data":"210ff7bac6cfa12f7290638e35af3aaca00b9a8c7b4834e7eb6026c9b226d464"} Nov 24 12:21:29 crc kubenswrapper[4930]: I1124 12:21:29.628327 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" Nov 24 12:21:29 crc kubenswrapper[4930]: I1124 12:21:29.640214 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b4b0309-31fd-407f-a03f-df928fd4675b-ssh-key\") pod \"7b4b0309-31fd-407f-a03f-df928fd4675b\" (UID: \"7b4b0309-31fd-407f-a03f-df928fd4675b\") " Nov 24 12:21:29 crc kubenswrapper[4930]: I1124 12:21:29.640269 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b4b0309-31fd-407f-a03f-df928fd4675b-inventory\") pod \"7b4b0309-31fd-407f-a03f-df928fd4675b\" (UID: \"7b4b0309-31fd-407f-a03f-df928fd4675b\") " Nov 24 12:21:29 crc kubenswrapper[4930]: I1124 12:21:29.640501 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhk9k\" (UniqueName: \"kubernetes.io/projected/7b4b0309-31fd-407f-a03f-df928fd4675b-kube-api-access-vhk9k\") pod \"7b4b0309-31fd-407f-a03f-df928fd4675b\" (UID: \"7b4b0309-31fd-407f-a03f-df928fd4675b\") " Nov 24 12:21:29 crc 
kubenswrapper[4930]: I1124 12:21:29.652036 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4b0309-31fd-407f-a03f-df928fd4675b-kube-api-access-vhk9k" (OuterVolumeSpecName: "kube-api-access-vhk9k") pod "7b4b0309-31fd-407f-a03f-df928fd4675b" (UID: "7b4b0309-31fd-407f-a03f-df928fd4675b"). InnerVolumeSpecName "kube-api-access-vhk9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:21:29 crc kubenswrapper[4930]: I1124 12:21:29.680110 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4b0309-31fd-407f-a03f-df928fd4675b-inventory" (OuterVolumeSpecName: "inventory") pod "7b4b0309-31fd-407f-a03f-df928fd4675b" (UID: "7b4b0309-31fd-407f-a03f-df928fd4675b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:21:29 crc kubenswrapper[4930]: I1124 12:21:29.685096 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4b0309-31fd-407f-a03f-df928fd4675b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7b4b0309-31fd-407f-a03f-df928fd4675b" (UID: "7b4b0309-31fd-407f-a03f-df928fd4675b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:21:29 crc kubenswrapper[4930]: I1124 12:21:29.742399 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b4b0309-31fd-407f-a03f-df928fd4675b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:21:29 crc kubenswrapper[4930]: I1124 12:21:29.742438 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b4b0309-31fd-407f-a03f-df928fd4675b-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:21:29 crc kubenswrapper[4930]: I1124 12:21:29.742449 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhk9k\" (UniqueName: \"kubernetes.io/projected/7b4b0309-31fd-407f-a03f-df928fd4675b-kube-api-access-vhk9k\") on node \"crc\" DevicePath \"\"" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.189659 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" event={"ID":"7b4b0309-31fd-407f-a03f-df928fd4675b","Type":"ContainerDied","Data":"edd62e2637f9a31463ec4d253519a77b70ed3764845d7ccf04002bbf93b1ae47"} Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.189723 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcq7j" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.189730 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edd62e2637f9a31463ec4d253519a77b70ed3764845d7ccf04002bbf93b1ae47" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.301967 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l"] Nov 24 12:21:30 crc kubenswrapper[4930]: E1124 12:21:30.302596 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4b0309-31fd-407f-a03f-df928fd4675b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.302613 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4b0309-31fd-407f-a03f-df928fd4675b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.302917 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b4b0309-31fd-407f-a03f-df928fd4675b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.304241 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.309113 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.309416 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.310658 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.313522 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.323906 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l"] Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.456422 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.456484 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbkzc\" (UniqueName: \"kubernetes.io/projected/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-kube-api-access-nbkzc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.456532 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.456971 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.559579 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.559680 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.559717 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbkzc\" (UniqueName: \"kubernetes.io/projected/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-kube-api-access-nbkzc\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.559767 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.565015 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.565043 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.565406 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.589441 4930 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-nbkzc\" (UniqueName: \"kubernetes.io/projected/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-kube-api-access-nbkzc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:30 crc kubenswrapper[4930]: I1124 12:21:30.629105 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:21:31 crc kubenswrapper[4930]: I1124 12:21:31.171719 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l"] Nov 24 12:21:31 crc kubenswrapper[4930]: I1124 12:21:31.198597 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" event={"ID":"c3f7af8b-b5d0-4361-ada0-42f01955a7d5","Type":"ContainerStarted","Data":"0a642af9ce9a70896375b997632a151439b993d9dffa435b0bfec388f43e8c85"} Nov 24 12:21:32 crc kubenswrapper[4930]: I1124 12:21:32.210343 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" event={"ID":"c3f7af8b-b5d0-4361-ada0-42f01955a7d5","Type":"ContainerStarted","Data":"229cf5d551fa6d4bf0c4353f2bbf32b6902253eccf102fe0415c7accaa2ca281"} Nov 24 12:21:32 crc kubenswrapper[4930]: I1124 12:21:32.236884 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" podStartSLOduration=1.807058584 podStartE2EDuration="2.236860123s" podCreationTimestamp="2025-11-24 12:21:30 +0000 UTC" firstStartedPulling="2025-11-24 12:21:31.177570641 +0000 UTC m=+1337.791898591" lastFinishedPulling="2025-11-24 12:21:31.60737218 +0000 UTC m=+1338.221700130" observedRunningTime="2025-11-24 12:21:32.227706339 +0000 UTC m=+1338.842034299" watchObservedRunningTime="2025-11-24 
12:21:32.236860123 +0000 UTC m=+1338.851188073" Nov 24 12:22:17 crc kubenswrapper[4930]: I1124 12:22:17.723452 4930 scope.go:117] "RemoveContainer" containerID="df70ae7af6a7506287afaa9a3009e3f0a234734d2aec886c043e992c8965b2c0" Nov 24 12:22:17 crc kubenswrapper[4930]: I1124 12:22:17.750205 4930 scope.go:117] "RemoveContainer" containerID="3f3df614aab9676be05589959fc29e0c09f36b69b61c72d2a912a1774e5702ea" Nov 24 12:23:01 crc kubenswrapper[4930]: I1124 12:23:01.809840 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:23:01 crc kubenswrapper[4930]: I1124 12:23:01.810563 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:23:17 crc kubenswrapper[4930]: I1124 12:23:17.863910 4930 scope.go:117] "RemoveContainer" containerID="d7e4690fd430981733b5b10b95243943f344eb7c87ebf4f521680c069ae3320f" Nov 24 12:23:31 crc kubenswrapper[4930]: I1124 12:23:31.809689 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:23:31 crc kubenswrapper[4930]: I1124 12:23:31.810738 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:24:01 crc kubenswrapper[4930]: I1124 12:24:01.809527 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:24:01 crc kubenswrapper[4930]: I1124 12:24:01.809982 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:24:01 crc kubenswrapper[4930]: I1124 12:24:01.810025 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 12:24:01 crc kubenswrapper[4930]: I1124 12:24:01.810707 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b1d415bebc3dcb6940325a2fd36f17aa3adbf53438534ff2d996acd866d5f23a"} pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:24:01 crc kubenswrapper[4930]: I1124 12:24:01.810750 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://b1d415bebc3dcb6940325a2fd36f17aa3adbf53438534ff2d996acd866d5f23a" gracePeriod=600 Nov 24 12:24:02 crc kubenswrapper[4930]: I1124 12:24:02.642969 4930 generic.go:334] "Generic (PLEG): container finished" 
podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="b1d415bebc3dcb6940325a2fd36f17aa3adbf53438534ff2d996acd866d5f23a" exitCode=0 Nov 24 12:24:02 crc kubenswrapper[4930]: I1124 12:24:02.643088 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"b1d415bebc3dcb6940325a2fd36f17aa3adbf53438534ff2d996acd866d5f23a"} Nov 24 12:24:02 crc kubenswrapper[4930]: I1124 12:24:02.643244 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e"} Nov 24 12:24:02 crc kubenswrapper[4930]: I1124 12:24:02.643274 4930 scope.go:117] "RemoveContainer" containerID="9b35d5fa3eb364268da5b5e0253eae62e65a2c6dfd8d0e613fb3c92e7e1d100d" Nov 24 12:24:41 crc kubenswrapper[4930]: I1124 12:24:41.993571 4930 generic.go:334] "Generic (PLEG): container finished" podID="c3f7af8b-b5d0-4361-ada0-42f01955a7d5" containerID="229cf5d551fa6d4bf0c4353f2bbf32b6902253eccf102fe0415c7accaa2ca281" exitCode=0 Nov 24 12:24:41 crc kubenswrapper[4930]: I1124 12:24:41.993710 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" event={"ID":"c3f7af8b-b5d0-4361-ada0-42f01955a7d5","Type":"ContainerDied","Data":"229cf5d551fa6d4bf0c4353f2bbf32b6902253eccf102fe0415c7accaa2ca281"} Nov 24 12:24:43 crc kubenswrapper[4930]: I1124 12:24:43.404349 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:24:43 crc kubenswrapper[4930]: I1124 12:24:43.555848 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-bootstrap-combined-ca-bundle\") pod \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " Nov 24 12:24:43 crc kubenswrapper[4930]: I1124 12:24:43.555934 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbkzc\" (UniqueName: \"kubernetes.io/projected/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-kube-api-access-nbkzc\") pod \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " Nov 24 12:24:43 crc kubenswrapper[4930]: I1124 12:24:43.556019 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-inventory\") pod \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " Nov 24 12:24:43 crc kubenswrapper[4930]: I1124 12:24:43.556060 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-ssh-key\") pod \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\" (UID: \"c3f7af8b-b5d0-4361-ada0-42f01955a7d5\") " Nov 24 12:24:43 crc kubenswrapper[4930]: I1124 12:24:43.577627 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c3f7af8b-b5d0-4361-ada0-42f01955a7d5" (UID: "c3f7af8b-b5d0-4361-ada0-42f01955a7d5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:24:43 crc kubenswrapper[4930]: I1124 12:24:43.590065 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-kube-api-access-nbkzc" (OuterVolumeSpecName: "kube-api-access-nbkzc") pod "c3f7af8b-b5d0-4361-ada0-42f01955a7d5" (UID: "c3f7af8b-b5d0-4361-ada0-42f01955a7d5"). InnerVolumeSpecName "kube-api-access-nbkzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:24:43 crc kubenswrapper[4930]: I1124 12:24:43.601214 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-inventory" (OuterVolumeSpecName: "inventory") pod "c3f7af8b-b5d0-4361-ada0-42f01955a7d5" (UID: "c3f7af8b-b5d0-4361-ada0-42f01955a7d5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:24:43 crc kubenswrapper[4930]: I1124 12:24:43.608817 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c3f7af8b-b5d0-4361-ada0-42f01955a7d5" (UID: "c3f7af8b-b5d0-4361-ada0-42f01955a7d5"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:24:43 crc kubenswrapper[4930]: I1124 12:24:43.658716 4930 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:24:43 crc kubenswrapper[4930]: I1124 12:24:43.658756 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbkzc\" (UniqueName: \"kubernetes.io/projected/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-kube-api-access-nbkzc\") on node \"crc\" DevicePath \"\"" Nov 24 12:24:43 crc kubenswrapper[4930]: I1124 12:24:43.658766 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:24:43 crc kubenswrapper[4930]: I1124 12:24:43.658776 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3f7af8b-b5d0-4361-ada0-42f01955a7d5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.012454 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" event={"ID":"c3f7af8b-b5d0-4361-ada0-42f01955a7d5","Type":"ContainerDied","Data":"0a642af9ce9a70896375b997632a151439b993d9dffa435b0bfec388f43e8c85"} Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.012676 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a642af9ce9a70896375b997632a151439b993d9dffa435b0bfec388f43e8c85" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.012777 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.100737 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt"] Nov 24 12:24:44 crc kubenswrapper[4930]: E1124 12:24:44.101752 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f7af8b-b5d0-4361-ada0-42f01955a7d5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.101843 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f7af8b-b5d0-4361-ada0-42f01955a7d5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.102101 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f7af8b-b5d0-4361-ada0-42f01955a7d5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.102787 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.106841 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.107278 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.107442 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.107623 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.113769 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt"] Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.168512 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94e8669b-69a8-41fb-ab05-d2e913495e16-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt\" (UID: \"94e8669b-69a8-41fb-ab05-d2e913495e16\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.168637 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbj6q\" (UniqueName: \"kubernetes.io/projected/94e8669b-69a8-41fb-ab05-d2e913495e16-kube-api-access-vbj6q\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt\" (UID: \"94e8669b-69a8-41fb-ab05-d2e913495e16\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 
12:24:44.168695 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/94e8669b-69a8-41fb-ab05-d2e913495e16-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt\" (UID: \"94e8669b-69a8-41fb-ab05-d2e913495e16\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.270914 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94e8669b-69a8-41fb-ab05-d2e913495e16-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt\" (UID: \"94e8669b-69a8-41fb-ab05-d2e913495e16\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.270998 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbj6q\" (UniqueName: \"kubernetes.io/projected/94e8669b-69a8-41fb-ab05-d2e913495e16-kube-api-access-vbj6q\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt\" (UID: \"94e8669b-69a8-41fb-ab05-d2e913495e16\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.271034 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/94e8669b-69a8-41fb-ab05-d2e913495e16-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt\" (UID: \"94e8669b-69a8-41fb-ab05-d2e913495e16\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.276158 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/94e8669b-69a8-41fb-ab05-d2e913495e16-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt\" (UID: \"94e8669b-69a8-41fb-ab05-d2e913495e16\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.278142 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94e8669b-69a8-41fb-ab05-d2e913495e16-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt\" (UID: \"94e8669b-69a8-41fb-ab05-d2e913495e16\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.287996 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbj6q\" (UniqueName: \"kubernetes.io/projected/94e8669b-69a8-41fb-ab05-d2e913495e16-kube-api-access-vbj6q\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt\" (UID: \"94e8669b-69a8-41fb-ab05-d2e913495e16\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.432327 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" Nov 24 12:24:44 crc kubenswrapper[4930]: I1124 12:24:44.970738 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt"] Nov 24 12:24:45 crc kubenswrapper[4930]: I1124 12:24:45.023491 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" event={"ID":"94e8669b-69a8-41fb-ab05-d2e913495e16","Type":"ContainerStarted","Data":"b624a87d006e1c5ab24f684573da8200cec299f238065569284d481b1d99327d"} Nov 24 12:24:47 crc kubenswrapper[4930]: I1124 12:24:47.045760 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" event={"ID":"94e8669b-69a8-41fb-ab05-d2e913495e16","Type":"ContainerStarted","Data":"a6e5b790e734333a42192d98a91b26e62f0e8a14f6c749b4e296b9f256838ea4"} Nov 24 12:24:47 crc kubenswrapper[4930]: I1124 12:24:47.062437 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" podStartSLOduration=1.492392395 podStartE2EDuration="3.062397515s" podCreationTimestamp="2025-11-24 12:24:44 +0000 UTC" firstStartedPulling="2025-11-24 12:24:44.980365485 +0000 UTC m=+1531.594693435" lastFinishedPulling="2025-11-24 12:24:46.550370615 +0000 UTC m=+1533.164698555" observedRunningTime="2025-11-24 12:24:47.061444037 +0000 UTC m=+1533.675771987" watchObservedRunningTime="2025-11-24 12:24:47.062397515 +0000 UTC m=+1533.676725465" Nov 24 12:25:35 crc kubenswrapper[4930]: I1124 12:25:35.039372 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e735-account-create-rgxch"] Nov 24 12:25:35 crc kubenswrapper[4930]: I1124 12:25:35.050326 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-4wqjp"] Nov 24 12:25:35 crc 
kubenswrapper[4930]: I1124 12:25:35.069228 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-4wqjp"] Nov 24 12:25:35 crc kubenswrapper[4930]: I1124 12:25:35.078963 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e735-account-create-rgxch"] Nov 24 12:25:36 crc kubenswrapper[4930]: I1124 12:25:36.030433 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-k9s8c"] Nov 24 12:25:36 crc kubenswrapper[4930]: I1124 12:25:36.041092 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a730-account-create-spt82"] Nov 24 12:25:36 crc kubenswrapper[4930]: I1124 12:25:36.052751 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-k9s8c"] Nov 24 12:25:36 crc kubenswrapper[4930]: I1124 12:25:36.061017 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-04b0-account-create-4dtrz"] Nov 24 12:25:36 crc kubenswrapper[4930]: I1124 12:25:36.068475 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rrwzz"] Nov 24 12:25:36 crc kubenswrapper[4930]: I1124 12:25:36.076377 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a730-account-create-spt82"] Nov 24 12:25:36 crc kubenswrapper[4930]: I1124 12:25:36.095151 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30dab223-0b89-4e97-a40d-6913ffa6e8b4" path="/var/lib/kubelet/pods/30dab223-0b89-4e97-a40d-6913ffa6e8b4/volumes" Nov 24 12:25:36 crc kubenswrapper[4930]: I1124 12:25:36.095730 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="545768db-9e2f-48e9-92a8-7eaa401eb0b0" path="/var/lib/kubelet/pods/545768db-9e2f-48e9-92a8-7eaa401eb0b0/volumes" Nov 24 12:25:36 crc kubenswrapper[4930]: I1124 12:25:36.096257 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b0739a-ce35-40bb-929e-38d59642bd43" 
path="/var/lib/kubelet/pods/c8b0739a-ce35-40bb-929e-38d59642bd43/volumes" Nov 24 12:25:36 crc kubenswrapper[4930]: I1124 12:25:36.096775 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f798dc87-b597-474d-a8f3-5a46781865cd" path="/var/lib/kubelet/pods/f798dc87-b597-474d-a8f3-5a46781865cd/volumes" Nov 24 12:25:36 crc kubenswrapper[4930]: I1124 12:25:36.097723 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rrwzz"] Nov 24 12:25:36 crc kubenswrapper[4930]: I1124 12:25:36.097753 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-04b0-account-create-4dtrz"] Nov 24 12:25:38 crc kubenswrapper[4930]: I1124 12:25:38.095937 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ec383ee-4477-4b17-be08-b1bdcea73a7f" path="/var/lib/kubelet/pods/2ec383ee-4477-4b17-be08-b1bdcea73a7f/volumes" Nov 24 12:25:38 crc kubenswrapper[4930]: I1124 12:25:38.097283 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c9eec6-bdfe-4456-a0ca-37c205ac5cba" path="/var/lib/kubelet/pods/73c9eec6-bdfe-4456-a0ca-37c205ac5cba/volumes" Nov 24 12:25:59 crc kubenswrapper[4930]: I1124 12:25:59.033695 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-4jrg8"] Nov 24 12:25:59 crc kubenswrapper[4930]: I1124 12:25:59.051363 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-4jrg8"] Nov 24 12:26:00 crc kubenswrapper[4930]: I1124 12:26:00.097643 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f43c338-1b9c-402b-ad1b-28e4ee015c32" path="/var/lib/kubelet/pods/1f43c338-1b9c-402b-ad1b-28e4ee015c32/volumes" Nov 24 12:26:14 crc kubenswrapper[4930]: I1124 12:26:14.047662 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-hj97v"] Nov 24 12:26:14 crc kubenswrapper[4930]: I1124 12:26:14.054121 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-db-create-c4nnz"] Nov 24 12:26:14 crc kubenswrapper[4930]: I1124 12:26:14.061232 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7748-account-create-np47j"] Nov 24 12:26:14 crc kubenswrapper[4930]: I1124 12:26:14.069229 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-hj97v"] Nov 24 12:26:14 crc kubenswrapper[4930]: I1124 12:26:14.078658 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3771-account-create-2h9v6"] Nov 24 12:26:14 crc kubenswrapper[4930]: I1124 12:26:14.121957 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb8f1e7c-7332-451d-90b2-c437bdf80712" path="/var/lib/kubelet/pods/eb8f1e7c-7332-451d-90b2-c437bdf80712/volumes" Nov 24 12:26:14 crc kubenswrapper[4930]: I1124 12:26:14.122972 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-c4nnz"] Nov 24 12:26:14 crc kubenswrapper[4930]: I1124 12:26:14.123013 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3771-account-create-2h9v6"] Nov 24 12:26:14 crc kubenswrapper[4930]: I1124 12:26:14.125778 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7748-account-create-np47j"] Nov 24 12:26:15 crc kubenswrapper[4930]: I1124 12:26:15.890246 4930 generic.go:334] "Generic (PLEG): container finished" podID="94e8669b-69a8-41fb-ab05-d2e913495e16" containerID="a6e5b790e734333a42192d98a91b26e62f0e8a14f6c749b4e296b9f256838ea4" exitCode=0 Nov 24 12:26:15 crc kubenswrapper[4930]: I1124 12:26:15.890367 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" event={"ID":"94e8669b-69a8-41fb-ab05-d2e913495e16","Type":"ContainerDied","Data":"a6e5b790e734333a42192d98a91b26e62f0e8a14f6c749b4e296b9f256838ea4"} Nov 24 12:26:16 crc kubenswrapper[4930]: I1124 12:26:16.101117 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24" path="/var/lib/kubelet/pods/7e36bea9-4d7c-4bc5-bc05-aaddf9cd3e24/volumes" Nov 24 12:26:16 crc kubenswrapper[4930]: I1124 12:26:16.104016 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0249865-90c2-41a0-9a76-54b0fa149773" path="/var/lib/kubelet/pods/a0249865-90c2-41a0-9a76-54b0fa149773/volumes" Nov 24 12:26:16 crc kubenswrapper[4930]: I1124 12:26:16.105247 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6f40db6-9e11-4862-8b25-286a96f9b180" path="/var/lib/kubelet/pods/c6f40db6-9e11-4862-8b25-286a96f9b180/volumes" Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.035002 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-7mkzz"] Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.048286 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a15c-account-create-f8snf"] Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.059961 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-7mkzz"] Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.073604 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-a15c-account-create-f8snf"] Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.295062 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.453670 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94e8669b-69a8-41fb-ab05-d2e913495e16-inventory\") pod \"94e8669b-69a8-41fb-ab05-d2e913495e16\" (UID: \"94e8669b-69a8-41fb-ab05-d2e913495e16\") " Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.453778 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbj6q\" (UniqueName: \"kubernetes.io/projected/94e8669b-69a8-41fb-ab05-d2e913495e16-kube-api-access-vbj6q\") pod \"94e8669b-69a8-41fb-ab05-d2e913495e16\" (UID: \"94e8669b-69a8-41fb-ab05-d2e913495e16\") " Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.453966 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/94e8669b-69a8-41fb-ab05-d2e913495e16-ssh-key\") pod \"94e8669b-69a8-41fb-ab05-d2e913495e16\" (UID: \"94e8669b-69a8-41fb-ab05-d2e913495e16\") " Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.466692 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e8669b-69a8-41fb-ab05-d2e913495e16-kube-api-access-vbj6q" (OuterVolumeSpecName: "kube-api-access-vbj6q") pod "94e8669b-69a8-41fb-ab05-d2e913495e16" (UID: "94e8669b-69a8-41fb-ab05-d2e913495e16"). InnerVolumeSpecName "kube-api-access-vbj6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.485641 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94e8669b-69a8-41fb-ab05-d2e913495e16-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "94e8669b-69a8-41fb-ab05-d2e913495e16" (UID: "94e8669b-69a8-41fb-ab05-d2e913495e16"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.489132 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94e8669b-69a8-41fb-ab05-d2e913495e16-inventory" (OuterVolumeSpecName: "inventory") pod "94e8669b-69a8-41fb-ab05-d2e913495e16" (UID: "94e8669b-69a8-41fb-ab05-d2e913495e16"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.557407 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94e8669b-69a8-41fb-ab05-d2e913495e16-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.557453 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbj6q\" (UniqueName: \"kubernetes.io/projected/94e8669b-69a8-41fb-ab05-d2e913495e16-kube-api-access-vbj6q\") on node \"crc\" DevicePath \"\"" Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.557465 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/94e8669b-69a8-41fb-ab05-d2e913495e16-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.924281 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" event={"ID":"94e8669b-69a8-41fb-ab05-d2e913495e16","Type":"ContainerDied","Data":"b624a87d006e1c5ab24f684573da8200cec299f238065569284d481b1d99327d"} Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.924324 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b624a87d006e1c5ab24f684573da8200cec299f238065569284d481b1d99327d" Nov 24 12:26:17 crc kubenswrapper[4930]: I1124 12:26:17.924411 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.000429 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj"] Nov 24 12:26:18 crc kubenswrapper[4930]: E1124 12:26:18.002166 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e8669b-69a8-41fb-ab05-d2e913495e16" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.002668 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e8669b-69a8-41fb-ab05-d2e913495e16" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.003067 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="94e8669b-69a8-41fb-ab05-d2e913495e16" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.004148 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.006147 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.007000 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.007232 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.007472 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.009289 4930 scope.go:117] "RemoveContainer" containerID="140a36fa0161f5c54adac070017088ccac6d36708059104c64c420984c39628a" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.011625 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj"] Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.042819 4930 scope.go:117] "RemoveContainer" containerID="1aa1d637426fa4174d93d26a94a08a6edd24928ef5c5bb1fe1a4755c515aee76" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.088104 4930 scope.go:117] "RemoveContainer" containerID="671a7abe5251b85867ed9cf8e414f61712079a702c095b47297f47205229c56e" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.098254 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="643713cf-450a-4539-a94c-29718af0f1bd" path="/var/lib/kubelet/pods/643713cf-450a-4539-a94c-29718af0f1bd/volumes" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.099347 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d18617d-a48f-421a-b109-9bc576b4fb8f" 
path="/var/lib/kubelet/pods/7d18617d-a48f-421a-b109-9bc576b4fb8f/volumes" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.116122 4930 scope.go:117] "RemoveContainer" containerID="b2836654a2839f1564576da6434cc885ecbe54859abe20c0d5483aa8d36d466b" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.140978 4930 scope.go:117] "RemoveContainer" containerID="21a0ca965c71dcf79238f478e5da9fb34749019005cecbb11d72f6fe66ebf76c" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.159920 4930 scope.go:117] "RemoveContainer" containerID="a9468c8d2bf2591e15284940d9bee2701f1fabd0e331e198b29500afdd1677fc" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.169461 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9q6j\" (UniqueName: \"kubernetes.io/projected/2e059ba1-d1de-4764-afd1-50b78af12ce8-kube-api-access-c9q6j\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj\" (UID: \"2e059ba1-d1de-4764-afd1-50b78af12ce8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.169561 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e059ba1-d1de-4764-afd1-50b78af12ce8-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj\" (UID: \"2e059ba1-d1de-4764-afd1-50b78af12ce8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.169584 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e059ba1-d1de-4764-afd1-50b78af12ce8-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj\" (UID: \"2e059ba1-d1de-4764-afd1-50b78af12ce8\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.178156 4930 scope.go:117] "RemoveContainer" containerID="e9d4e9596371f60129b9c619833e145b0af4900738eb90a358bd58b1a1a004d8" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.196277 4930 scope.go:117] "RemoveContainer" containerID="2a28da84a9b2baf8217c965579387afe450bbde92845fee47e64a7d7cba400c7" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.254378 4930 scope.go:117] "RemoveContainer" containerID="63ab48553dc6e035e615f1745def12f81e794331a4b2bed7e0ca19e4596f8ab6" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.271350 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9q6j\" (UniqueName: \"kubernetes.io/projected/2e059ba1-d1de-4764-afd1-50b78af12ce8-kube-api-access-c9q6j\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj\" (UID: \"2e059ba1-d1de-4764-afd1-50b78af12ce8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.271408 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e059ba1-d1de-4764-afd1-50b78af12ce8-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj\" (UID: \"2e059ba1-d1de-4764-afd1-50b78af12ce8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.271467 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e059ba1-d1de-4764-afd1-50b78af12ce8-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj\" (UID: \"2e059ba1-d1de-4764-afd1-50b78af12ce8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 
12:26:18.274909 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e059ba1-d1de-4764-afd1-50b78af12ce8-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj\" (UID: \"2e059ba1-d1de-4764-afd1-50b78af12ce8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.275282 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e059ba1-d1de-4764-afd1-50b78af12ce8-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj\" (UID: \"2e059ba1-d1de-4764-afd1-50b78af12ce8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.289778 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9q6j\" (UniqueName: \"kubernetes.io/projected/2e059ba1-d1de-4764-afd1-50b78af12ce8-kube-api-access-c9q6j\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj\" (UID: \"2e059ba1-d1de-4764-afd1-50b78af12ce8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.372510 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.372601 4930 scope.go:117] "RemoveContainer" containerID="6d38a8e0a3b04e2ea523e18a81834ab9fadccf8507fe435f13b6a9a2eabac9e9" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.418210 4930 scope.go:117] "RemoveContainer" containerID="5cc1bf97adcf98375330d1f214d7f21918443958fd3e279a528f0f410ac10916" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.439434 4930 scope.go:117] "RemoveContainer" containerID="df753abe40f242dfd100eb9e49188ae94fac33ec2580c4494207b8afa715642f" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.487084 4930 scope.go:117] "RemoveContainer" containerID="3a9365d71f24490e13d4e3c7913ba2b134de5a8b9d8243783cbacf96132704b0" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.509907 4930 scope.go:117] "RemoveContainer" containerID="ac55d35b8510a314eaf9e9bd2d6aa0b3175d4425e4bbf9e02bf9730df6b5d315" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.533418 4930 scope.go:117] "RemoveContainer" containerID="64e59a1906323723c9d02214b6dbe080d7104df19eea71f3ba917c849c99ea78" Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.871340 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj"] Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.878961 4930 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:26:18 crc kubenswrapper[4930]: I1124 12:26:18.932783 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" event={"ID":"2e059ba1-d1de-4764-afd1-50b78af12ce8","Type":"ContainerStarted","Data":"464238be8085196b4c925625433e61881fbc8b12d515021b71d12a5ae6d6c24c"} Nov 24 12:26:19 crc kubenswrapper[4930]: I1124 12:26:19.962245 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" event={"ID":"2e059ba1-d1de-4764-afd1-50b78af12ce8","Type":"ContainerStarted","Data":"50ca7c0402941aa3a1c1c69c58baf0f6562ab83598c81c877614b20c1d9825c3"} Nov 24 12:26:20 crc kubenswrapper[4930]: I1124 12:26:20.004526 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" podStartSLOduration=2.603781736 podStartE2EDuration="3.004507031s" podCreationTimestamp="2025-11-24 12:26:17 +0000 UTC" firstStartedPulling="2025-11-24 12:26:18.878695742 +0000 UTC m=+1625.493023692" lastFinishedPulling="2025-11-24 12:26:19.279421037 +0000 UTC m=+1625.893748987" observedRunningTime="2025-11-24 12:26:19.98715861 +0000 UTC m=+1626.601486560" watchObservedRunningTime="2025-11-24 12:26:20.004507031 +0000 UTC m=+1626.618834981" Nov 24 12:26:21 crc kubenswrapper[4930]: I1124 12:26:21.046932 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rll74"] Nov 24 12:26:21 crc kubenswrapper[4930]: I1124 12:26:21.061454 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rll74"] Nov 24 12:26:22 crc kubenswrapper[4930]: I1124 12:26:22.094280 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a24f8e38-6022-4f62-b5c5-4d42d7cd140c" path="/var/lib/kubelet/pods/a24f8e38-6022-4f62-b5c5-4d42d7cd140c/volumes" Nov 24 12:26:31 crc kubenswrapper[4930]: I1124 12:26:31.808858 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:26:31 crc kubenswrapper[4930]: I1124 12:26:31.809465 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" 
podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:26:51 crc kubenswrapper[4930]: I1124 12:26:51.044046 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-zfb9w"] Nov 24 12:26:51 crc kubenswrapper[4930]: I1124 12:26:51.053803 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-zfb9w"] Nov 24 12:26:52 crc kubenswrapper[4930]: I1124 12:26:52.096630 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3933d228-dcc6-4ce9-97ff-17a5a2d49d0f" path="/var/lib/kubelet/pods/3933d228-dcc6-4ce9-97ff-17a5a2d49d0f/volumes" Nov 24 12:26:58 crc kubenswrapper[4930]: I1124 12:26:58.039731 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-drnpc"] Nov 24 12:26:58 crc kubenswrapper[4930]: I1124 12:26:58.050362 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-drnpc"] Nov 24 12:26:58 crc kubenswrapper[4930]: I1124 12:26:58.096797 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35d6db25-381f-4f83-a033-984addf8da0d" path="/var/lib/kubelet/pods/35d6db25-381f-4f83-a033-984addf8da0d/volumes" Nov 24 12:27:00 crc kubenswrapper[4930]: I1124 12:27:00.032180 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5nhh8"] Nov 24 12:27:00 crc kubenswrapper[4930]: I1124 12:27:00.040467 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5nhh8"] Nov 24 12:27:00 crc kubenswrapper[4930]: I1124 12:27:00.096375 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f75b82b-237c-4bcd-9bd4-8e72a43204aa" path="/var/lib/kubelet/pods/8f75b82b-237c-4bcd-9bd4-8e72a43204aa/volumes" Nov 24 12:27:01 crc kubenswrapper[4930]: I1124 12:27:01.809171 4930 patch_prober.go:28] 
interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:27:01 crc kubenswrapper[4930]: I1124 12:27:01.810037 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:27:03 crc kubenswrapper[4930]: I1124 12:27:03.060594 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-pzqxp"] Nov 24 12:27:03 crc kubenswrapper[4930]: I1124 12:27:03.071987 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-pzqxp"] Nov 24 12:27:04 crc kubenswrapper[4930]: I1124 12:27:04.101260 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44fb1f8c-0796-4310-b053-8222837cfbf2" path="/var/lib/kubelet/pods/44fb1f8c-0796-4310-b053-8222837cfbf2/volumes" Nov 24 12:27:18 crc kubenswrapper[4930]: I1124 12:27:18.779529 4930 scope.go:117] "RemoveContainer" containerID="4a44ef7b109c6ccd359bbdb8ed3e9bf626ae274ff45ade5597326ee672520e40" Nov 24 12:27:18 crc kubenswrapper[4930]: I1124 12:27:18.842488 4930 scope.go:117] "RemoveContainer" containerID="a59d7eb3f75edf836d5beb89b44d3608b0974951449769906a5008b201d810b2" Nov 24 12:27:18 crc kubenswrapper[4930]: I1124 12:27:18.885614 4930 scope.go:117] "RemoveContainer" containerID="18dbb96e3bc811431ec23dfee6de196b683ceec9debc3669650b9c188ec25d59" Nov 24 12:27:18 crc kubenswrapper[4930]: I1124 12:27:18.935930 4930 scope.go:117] "RemoveContainer" containerID="238acbf81621feccb913f177fdbc5cc93e7434423a7af655da8b2ef55e2d92f9" Nov 24 12:27:18 crc kubenswrapper[4930]: I1124 
12:27:18.987021 4930 scope.go:117] "RemoveContainer" containerID="682b3369b40866db1c4d5dd05390f9c51695077c1147c17303a5724fff51c51c" Nov 24 12:27:22 crc kubenswrapper[4930]: I1124 12:27:22.038079 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rcfd6"] Nov 24 12:27:22 crc kubenswrapper[4930]: I1124 12:27:22.051559 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rcfd6"] Nov 24 12:27:22 crc kubenswrapper[4930]: I1124 12:27:22.093532 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9169db1f-c94f-45a3-bc97-6ad40d17b7d1" path="/var/lib/kubelet/pods/9169db1f-c94f-45a3-bc97-6ad40d17b7d1/volumes" Nov 24 12:27:27 crc kubenswrapper[4930]: I1124 12:27:27.606779 4930 generic.go:334] "Generic (PLEG): container finished" podID="2e059ba1-d1de-4764-afd1-50b78af12ce8" containerID="50ca7c0402941aa3a1c1c69c58baf0f6562ab83598c81c877614b20c1d9825c3" exitCode=0 Nov 24 12:27:27 crc kubenswrapper[4930]: I1124 12:27:27.606874 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" event={"ID":"2e059ba1-d1de-4764-afd1-50b78af12ce8","Type":"ContainerDied","Data":"50ca7c0402941aa3a1c1c69c58baf0f6562ab83598c81c877614b20c1d9825c3"} Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.003954 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.070250 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e059ba1-d1de-4764-afd1-50b78af12ce8-inventory\") pod \"2e059ba1-d1de-4764-afd1-50b78af12ce8\" (UID: \"2e059ba1-d1de-4764-afd1-50b78af12ce8\") " Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.070470 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9q6j\" (UniqueName: \"kubernetes.io/projected/2e059ba1-d1de-4764-afd1-50b78af12ce8-kube-api-access-c9q6j\") pod \"2e059ba1-d1de-4764-afd1-50b78af12ce8\" (UID: \"2e059ba1-d1de-4764-afd1-50b78af12ce8\") " Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.070589 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e059ba1-d1de-4764-afd1-50b78af12ce8-ssh-key\") pod \"2e059ba1-d1de-4764-afd1-50b78af12ce8\" (UID: \"2e059ba1-d1de-4764-afd1-50b78af12ce8\") " Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.079178 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e059ba1-d1de-4764-afd1-50b78af12ce8-kube-api-access-c9q6j" (OuterVolumeSpecName: "kube-api-access-c9q6j") pod "2e059ba1-d1de-4764-afd1-50b78af12ce8" (UID: "2e059ba1-d1de-4764-afd1-50b78af12ce8"). InnerVolumeSpecName "kube-api-access-c9q6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.099735 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e059ba1-d1de-4764-afd1-50b78af12ce8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2e059ba1-d1de-4764-afd1-50b78af12ce8" (UID: "2e059ba1-d1de-4764-afd1-50b78af12ce8"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.109911 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e059ba1-d1de-4764-afd1-50b78af12ce8-inventory" (OuterVolumeSpecName: "inventory") pod "2e059ba1-d1de-4764-afd1-50b78af12ce8" (UID: "2e059ba1-d1de-4764-afd1-50b78af12ce8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.173303 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e059ba1-d1de-4764-afd1-50b78af12ce8-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.173639 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9q6j\" (UniqueName: \"kubernetes.io/projected/2e059ba1-d1de-4764-afd1-50b78af12ce8-kube-api-access-c9q6j\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.173711 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e059ba1-d1de-4764-afd1-50b78af12ce8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.628779 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" event={"ID":"2e059ba1-d1de-4764-afd1-50b78af12ce8","Type":"ContainerDied","Data":"464238be8085196b4c925625433e61881fbc8b12d515021b71d12a5ae6d6c24c"} Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.629151 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="464238be8085196b4c925625433e61881fbc8b12d515021b71d12a5ae6d6c24c" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.629238 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.728755 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv"] Nov 24 12:27:29 crc kubenswrapper[4930]: E1124 12:27:29.729190 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e059ba1-d1de-4764-afd1-50b78af12ce8" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.729213 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e059ba1-d1de-4764-afd1-50b78af12ce8" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.729509 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e059ba1-d1de-4764-afd1-50b78af12ce8" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.730257 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.733964 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.734224 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.734582 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.734924 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.746709 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv"] Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.886821 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv\" (UID: \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.886919 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv\" (UID: \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.887016 4930 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cpz5\" (UniqueName: \"kubernetes.io/projected/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-kube-api-access-8cpz5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv\" (UID: \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.989177 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cpz5\" (UniqueName: \"kubernetes.io/projected/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-kube-api-access-8cpz5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv\" (UID: \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.989301 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv\" (UID: \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" Nov 24 12:27:29 crc kubenswrapper[4930]: I1124 12:27:29.989402 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv\" (UID: \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" Nov 24 12:27:30 crc kubenswrapper[4930]: I1124 12:27:30.009858 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv\" (UID: \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" Nov 24 12:27:30 crc kubenswrapper[4930]: I1124 12:27:30.009879 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv\" (UID: \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" Nov 24 12:27:30 crc kubenswrapper[4930]: I1124 12:27:30.028656 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cpz5\" (UniqueName: \"kubernetes.io/projected/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-kube-api-access-8cpz5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv\" (UID: \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" Nov 24 12:27:30 crc kubenswrapper[4930]: I1124 12:27:30.050816 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" Nov 24 12:27:30 crc kubenswrapper[4930]: I1124 12:27:30.543488 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv"] Nov 24 12:27:30 crc kubenswrapper[4930]: I1124 12:27:30.639481 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" event={"ID":"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301","Type":"ContainerStarted","Data":"42b68cf4826b16499b77344f9ea9de9245d08a36c9b5a7f7decacd1d4e5bbb9c"} Nov 24 12:27:31 crc kubenswrapper[4930]: I1124 12:27:31.648983 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" event={"ID":"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301","Type":"ContainerStarted","Data":"d594dbac9a4a04078beaecf4b58576869e086ef226b1084c855edf7c8ce6df06"} Nov 24 12:27:31 crc kubenswrapper[4930]: I1124 12:27:31.809940 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:27:31 crc kubenswrapper[4930]: I1124 12:27:31.810009 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:27:31 crc kubenswrapper[4930]: I1124 12:27:31.810063 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 12:27:31 crc kubenswrapper[4930]: I1124 
12:27:31.810859 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e"} pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:27:31 crc kubenswrapper[4930]: I1124 12:27:31.810930 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" gracePeriod=600 Nov 24 12:27:31 crc kubenswrapper[4930]: E1124 12:27:31.938033 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:27:32 crc kubenswrapper[4930]: I1124 12:27:32.659594 4930 generic.go:334] "Generic (PLEG): container finished" podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" exitCode=0 Nov 24 12:27:32 crc kubenswrapper[4930]: I1124 12:27:32.659703 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e"} Nov 24 12:27:32 crc kubenswrapper[4930]: I1124 12:27:32.660076 4930 scope.go:117] "RemoveContainer" 
containerID="b1d415bebc3dcb6940325a2fd36f17aa3adbf53438534ff2d996acd866d5f23a" Nov 24 12:27:32 crc kubenswrapper[4930]: I1124 12:27:32.660947 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:27:32 crc kubenswrapper[4930]: E1124 12:27:32.661389 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:27:32 crc kubenswrapper[4930]: I1124 12:27:32.697217 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" podStartSLOduration=3.263433357 podStartE2EDuration="3.697198316s" podCreationTimestamp="2025-11-24 12:27:29 +0000 UTC" firstStartedPulling="2025-11-24 12:27:30.547953075 +0000 UTC m=+1697.162281025" lastFinishedPulling="2025-11-24 12:27:30.981718034 +0000 UTC m=+1697.596045984" observedRunningTime="2025-11-24 12:27:31.670288513 +0000 UTC m=+1698.284616463" watchObservedRunningTime="2025-11-24 12:27:32.697198316 +0000 UTC m=+1699.311526266" Nov 24 12:27:36 crc kubenswrapper[4930]: I1124 12:27:36.717702 4930 generic.go:334] "Generic (PLEG): container finished" podID="4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301" containerID="d594dbac9a4a04078beaecf4b58576869e086ef226b1084c855edf7c8ce6df06" exitCode=0 Nov 24 12:27:36 crc kubenswrapper[4930]: I1124 12:27:36.717799 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" 
event={"ID":"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301","Type":"ContainerDied","Data":"d594dbac9a4a04078beaecf4b58576869e086ef226b1084c855edf7c8ce6df06"} Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.196861 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.246800 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-inventory\") pod \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\" (UID: \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\") " Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.246849 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-ssh-key\") pod \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\" (UID: \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\") " Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.246944 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cpz5\" (UniqueName: \"kubernetes.io/projected/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-kube-api-access-8cpz5\") pod \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\" (UID: \"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301\") " Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.252670 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-kube-api-access-8cpz5" (OuterVolumeSpecName: "kube-api-access-8cpz5") pod "4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301" (UID: "4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301"). InnerVolumeSpecName "kube-api-access-8cpz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.273742 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-inventory" (OuterVolumeSpecName: "inventory") pod "4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301" (UID: "4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.289943 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301" (UID: "4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.348968 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.348998 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.349010 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cpz5\" (UniqueName: \"kubernetes.io/projected/4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301-kube-api-access-8cpz5\") on node \"crc\" DevicePath \"\"" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.766592 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" 
event={"ID":"4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301","Type":"ContainerDied","Data":"42b68cf4826b16499b77344f9ea9de9245d08a36c9b5a7f7decacd1d4e5bbb9c"} Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.766870 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42b68cf4826b16499b77344f9ea9de9245d08a36c9b5a7f7decacd1d4e5bbb9c" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.766675 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.818757 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57"] Nov 24 12:27:38 crc kubenswrapper[4930]: E1124 12:27:38.819200 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.819220 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.819408 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.820022 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.822844 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.823202 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.823277 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.823532 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.837016 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57"] Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.861323 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xrq57\" (UID: \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.861383 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xrq57\" (UID: \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.861418 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2xlg\" (UniqueName: \"kubernetes.io/projected/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-kube-api-access-x2xlg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xrq57\" (UID: \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.963252 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xrq57\" (UID: \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.963353 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xrq57\" (UID: \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.963374 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2xlg\" (UniqueName: \"kubernetes.io/projected/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-kube-api-access-x2xlg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xrq57\" (UID: \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.968318 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xrq57\" (UID: 
\"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.973075 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xrq57\" (UID: \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" Nov 24 12:27:38 crc kubenswrapper[4930]: I1124 12:27:38.979174 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2xlg\" (UniqueName: \"kubernetes.io/projected/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-kube-api-access-x2xlg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xrq57\" (UID: \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" Nov 24 12:27:39 crc kubenswrapper[4930]: I1124 12:27:39.173761 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" Nov 24 12:27:39 crc kubenswrapper[4930]: I1124 12:27:39.683333 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57"] Nov 24 12:27:39 crc kubenswrapper[4930]: I1124 12:27:39.775817 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" event={"ID":"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9","Type":"ContainerStarted","Data":"33475d1238c8705355355a4292924570de05a154fd448daa53c601aa8917c0cd"} Nov 24 12:27:40 crc kubenswrapper[4930]: I1124 12:27:40.785395 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" event={"ID":"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9","Type":"ContainerStarted","Data":"681f71ea85ee669a24febcd8e03bb9e04e1e9d3ad3a96b728b6892589aa02ea0"} Nov 24 12:27:40 crc kubenswrapper[4930]: I1124 12:27:40.804745 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" podStartSLOduration=2.374036993 podStartE2EDuration="2.804727394s" podCreationTimestamp="2025-11-24 12:27:38 +0000 UTC" firstStartedPulling="2025-11-24 12:27:39.686989348 +0000 UTC m=+1706.301317288" lastFinishedPulling="2025-11-24 12:27:40.117679739 +0000 UTC m=+1706.732007689" observedRunningTime="2025-11-24 12:27:40.803606212 +0000 UTC m=+1707.417934162" watchObservedRunningTime="2025-11-24 12:27:40.804727394 +0000 UTC m=+1707.419055344" Nov 24 12:27:46 crc kubenswrapper[4930]: I1124 12:27:46.084797 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:27:46 crc kubenswrapper[4930]: E1124 12:27:46.085877 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:27:58 crc kubenswrapper[4930]: I1124 12:27:58.044645 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vx4m5"] Nov 24 12:27:58 crc kubenswrapper[4930]: I1124 12:27:58.060798 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vx4m5"] Nov 24 12:27:58 crc kubenswrapper[4930]: I1124 12:27:58.069309 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-d47hr"] Nov 24 12:27:58 crc kubenswrapper[4930]: I1124 12:27:58.076892 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-d47hr"] Nov 24 12:27:58 crc kubenswrapper[4930]: I1124 12:27:58.095692 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25d0ae65-ed30-465d-a12a-65394f309c5a" path="/var/lib/kubelet/pods/25d0ae65-ed30-465d-a12a-65394f309c5a/volumes" Nov 24 12:27:58 crc kubenswrapper[4930]: I1124 12:27:58.096584 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a37265e-62d9-4ebc-9793-aed961e89590" path="/var/lib/kubelet/pods/8a37265e-62d9-4ebc-9793-aed961e89590/volumes" Nov 24 12:27:59 crc kubenswrapper[4930]: I1124 12:27:59.031961 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5302-account-create-kthkc"] Nov 24 12:27:59 crc kubenswrapper[4930]: I1124 12:27:59.044970 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-16c3-account-create-589jb"] Nov 24 12:27:59 crc kubenswrapper[4930]: I1124 12:27:59.053942 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-4hgkg"] Nov 24 12:27:59 crc kubenswrapper[4930]: I1124 12:27:59.060644 
4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8a17-account-create-6h9tj"] Nov 24 12:27:59 crc kubenswrapper[4930]: I1124 12:27:59.066818 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5302-account-create-kthkc"] Nov 24 12:27:59 crc kubenswrapper[4930]: I1124 12:27:59.072725 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-16c3-account-create-589jb"] Nov 24 12:27:59 crc kubenswrapper[4930]: I1124 12:27:59.078518 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-4hgkg"] Nov 24 12:27:59 crc kubenswrapper[4930]: I1124 12:27:59.084386 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8a17-account-create-6h9tj"] Nov 24 12:28:00 crc kubenswrapper[4930]: I1124 12:28:00.084638 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:28:00 crc kubenswrapper[4930]: E1124 12:28:00.084895 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:28:00 crc kubenswrapper[4930]: I1124 12:28:00.094984 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="054173d4-2a3d-45a3-bb82-de2c7afc4316" path="/var/lib/kubelet/pods/054173d4-2a3d-45a3-bb82-de2c7afc4316/volumes" Nov 24 12:28:00 crc kubenswrapper[4930]: I1124 12:28:00.095815 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3ef300-b344-4fba-a285-f85430bccd47" path="/var/lib/kubelet/pods/2a3ef300-b344-4fba-a285-f85430bccd47/volumes" Nov 24 12:28:00 crc kubenswrapper[4930]: 
I1124 12:28:00.096651 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85937f48-37f8-4673-ad68-91c8b5f10a8e" path="/var/lib/kubelet/pods/85937f48-37f8-4673-ad68-91c8b5f10a8e/volumes" Nov 24 12:28:00 crc kubenswrapper[4930]: I1124 12:28:00.097451 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9988481-8889-4845-a558-9a3fa4f14322" path="/var/lib/kubelet/pods/a9988481-8889-4845-a558-9a3fa4f14322/volumes" Nov 24 12:28:13 crc kubenswrapper[4930]: I1124 12:28:13.085295 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:28:13 crc kubenswrapper[4930]: E1124 12:28:13.087455 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:28:16 crc kubenswrapper[4930]: I1124 12:28:16.101387 4930 generic.go:334] "Generic (PLEG): container finished" podID="3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9" containerID="681f71ea85ee669a24febcd8e03bb9e04e1e9d3ad3a96b728b6892589aa02ea0" exitCode=0 Nov 24 12:28:16 crc kubenswrapper[4930]: I1124 12:28:16.102503 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" event={"ID":"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9","Type":"ContainerDied","Data":"681f71ea85ee669a24febcd8e03bb9e04e1e9d3ad3a96b728b6892589aa02ea0"} Nov 24 12:28:17 crc kubenswrapper[4930]: I1124 12:28:17.537373 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" Nov 24 12:28:17 crc kubenswrapper[4930]: I1124 12:28:17.714114 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-inventory\") pod \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\" (UID: \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\") " Nov 24 12:28:17 crc kubenswrapper[4930]: I1124 12:28:17.714279 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-ssh-key\") pod \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\" (UID: \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\") " Nov 24 12:28:17 crc kubenswrapper[4930]: I1124 12:28:17.714324 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2xlg\" (UniqueName: \"kubernetes.io/projected/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-kube-api-access-x2xlg\") pod \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\" (UID: \"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9\") " Nov 24 12:28:17 crc kubenswrapper[4930]: I1124 12:28:17.727665 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-kube-api-access-x2xlg" (OuterVolumeSpecName: "kube-api-access-x2xlg") pod "3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9" (UID: "3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9"). InnerVolumeSpecName "kube-api-access-x2xlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:17 crc kubenswrapper[4930]: I1124 12:28:17.746502 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9" (UID: "3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:17 crc kubenswrapper[4930]: I1124 12:28:17.748400 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-inventory" (OuterVolumeSpecName: "inventory") pod "3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9" (UID: "3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:17 crc kubenswrapper[4930]: I1124 12:28:17.823645 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:17 crc kubenswrapper[4930]: I1124 12:28:17.823685 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:17 crc kubenswrapper[4930]: I1124 12:28:17.823700 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2xlg\" (UniqueName: \"kubernetes.io/projected/3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9-kube-api-access-x2xlg\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.122887 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" event={"ID":"3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9","Type":"ContainerDied","Data":"33475d1238c8705355355a4292924570de05a154fd448daa53c601aa8917c0cd"} Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.122989 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33475d1238c8705355355a4292924570de05a154fd448daa53c601aa8917c0cd" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.122990 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xrq57" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.205089 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj"] Nov 24 12:28:18 crc kubenswrapper[4930]: E1124 12:28:18.205457 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.205469 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.205678 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.206273 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.208490 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.208891 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.209018 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.214156 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.223273 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj"] Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.333746 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dab908a-df78-4c5a-945f-25221b75df7a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj\" (UID: \"7dab908a-df78-4c5a-945f-25221b75df7a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.334045 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7dab908a-df78-4c5a-945f-25221b75df7a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj\" (UID: \"7dab908a-df78-4c5a-945f-25221b75df7a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.334117 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbt8\" (UniqueName: \"kubernetes.io/projected/7dab908a-df78-4c5a-945f-25221b75df7a-kube-api-access-qhbt8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj\" (UID: \"7dab908a-df78-4c5a-945f-25221b75df7a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.435998 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dab908a-df78-4c5a-945f-25221b75df7a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj\" (UID: \"7dab908a-df78-4c5a-945f-25221b75df7a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.436051 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7dab908a-df78-4c5a-945f-25221b75df7a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj\" (UID: \"7dab908a-df78-4c5a-945f-25221b75df7a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.436111 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhbt8\" (UniqueName: \"kubernetes.io/projected/7dab908a-df78-4c5a-945f-25221b75df7a-kube-api-access-qhbt8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj\" (UID: \"7dab908a-df78-4c5a-945f-25221b75df7a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.440864 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dab908a-df78-4c5a-945f-25221b75df7a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj\" (UID: 
\"7dab908a-df78-4c5a-945f-25221b75df7a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.449747 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7dab908a-df78-4c5a-945f-25221b75df7a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj\" (UID: \"7dab908a-df78-4c5a-945f-25221b75df7a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.458438 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhbt8\" (UniqueName: \"kubernetes.io/projected/7dab908a-df78-4c5a-945f-25221b75df7a-kube-api-access-qhbt8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj\" (UID: \"7dab908a-df78-4c5a-945f-25221b75df7a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.521263 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" Nov 24 12:28:18 crc kubenswrapper[4930]: W1124 12:28:18.906186 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dab908a_df78_4c5a_945f_25221b75df7a.slice/crio-83461a3c5d23ce4695338919371c7a5e34f66d1d8d8dcb1fc5df54dc502dd548 WatchSource:0}: Error finding container 83461a3c5d23ce4695338919371c7a5e34f66d1d8d8dcb1fc5df54dc502dd548: Status 404 returned error can't find the container with id 83461a3c5d23ce4695338919371c7a5e34f66d1d8d8dcb1fc5df54dc502dd548 Nov 24 12:28:18 crc kubenswrapper[4930]: I1124 12:28:18.912361 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj"] Nov 24 12:28:19 crc kubenswrapper[4930]: I1124 12:28:19.088731 4930 scope.go:117] "RemoveContainer" containerID="14561a44121a426c095fdbca63b4658747afaef6e840a09db2b64e0512cc96ce" Nov 24 12:28:19 crc kubenswrapper[4930]: I1124 12:28:19.111891 4930 scope.go:117] "RemoveContainer" containerID="2c00ab2a0f29c7f54a3a5bcbe15f2dad24fec78078a3656baa2217c748314ff0" Nov 24 12:28:19 crc kubenswrapper[4930]: I1124 12:28:19.137974 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" event={"ID":"7dab908a-df78-4c5a-945f-25221b75df7a","Type":"ContainerStarted","Data":"83461a3c5d23ce4695338919371c7a5e34f66d1d8d8dcb1fc5df54dc502dd548"} Nov 24 12:28:19 crc kubenswrapper[4930]: I1124 12:28:19.169113 4930 scope.go:117] "RemoveContainer" containerID="f840e1d54a7caffd606ed22fbeef274096c181663d935deb0122f4a5fee46fda" Nov 24 12:28:19 crc kubenswrapper[4930]: I1124 12:28:19.205251 4930 scope.go:117] "RemoveContainer" containerID="60132ca2ded6560fabec0c323b4034be0cb1dd5e85c1ca4a161d0aaf80a07014" Nov 24 12:28:19 crc kubenswrapper[4930]: I1124 12:28:19.223402 4930 scope.go:117] "RemoveContainer" 
containerID="fcccdb2f5d804cacd5cddd4207cdba8a67cbe037981a7270ef51b3186efd8502" Nov 24 12:28:19 crc kubenswrapper[4930]: I1124 12:28:19.245737 4930 scope.go:117] "RemoveContainer" containerID="0b1468d4445c9a3e7f6ce5bbe03a1915b9db3393bbabead6b2be554463fc2185" Nov 24 12:28:19 crc kubenswrapper[4930]: I1124 12:28:19.262888 4930 scope.go:117] "RemoveContainer" containerID="aa919e006509b95e84c3f86836308d4e51d895cadcd3e5f57a1609f95dbb352f" Nov 24 12:28:20 crc kubenswrapper[4930]: I1124 12:28:20.156236 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" event={"ID":"7dab908a-df78-4c5a-945f-25221b75df7a","Type":"ContainerStarted","Data":"ff4620c95e4be664395ca3dd07bbaeb44d8037ed331168c3fa99cceae6dcd908"} Nov 24 12:28:20 crc kubenswrapper[4930]: I1124 12:28:20.182446 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" podStartSLOduration=1.629727119 podStartE2EDuration="2.182426184s" podCreationTimestamp="2025-11-24 12:28:18 +0000 UTC" firstStartedPulling="2025-11-24 12:28:18.909020641 +0000 UTC m=+1745.523348591" lastFinishedPulling="2025-11-24 12:28:19.461719676 +0000 UTC m=+1746.076047656" observedRunningTime="2025-11-24 12:28:20.173725373 +0000 UTC m=+1746.788053383" watchObservedRunningTime="2025-11-24 12:28:20.182426184 +0000 UTC m=+1746.796754144" Nov 24 12:28:27 crc kubenswrapper[4930]: I1124 12:28:27.081904 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xj8hp"] Nov 24 12:28:27 crc kubenswrapper[4930]: I1124 12:28:27.097884 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xj8hp"] Nov 24 12:28:28 crc kubenswrapper[4930]: I1124 12:28:28.085208 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:28:28 crc kubenswrapper[4930]: E1124 
12:28:28.085726 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:28:28 crc kubenswrapper[4930]: I1124 12:28:28.097959 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58c592a3-0b0c-45e5-a53e-2a672e3ce388" path="/var/lib/kubelet/pods/58c592a3-0b0c-45e5-a53e-2a672e3ce388/volumes" Nov 24 12:28:43 crc kubenswrapper[4930]: I1124 12:28:43.084377 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:28:43 crc kubenswrapper[4930]: E1124 12:28:43.085147 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:28:50 crc kubenswrapper[4930]: I1124 12:28:50.042160 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-ttwlm"] Nov 24 12:28:50 crc kubenswrapper[4930]: I1124 12:28:50.050838 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-ttwlm"] Nov 24 12:28:50 crc kubenswrapper[4930]: I1124 12:28:50.096065 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51338fbc-fcb2-458b-9b02-8f7fec515821" path="/var/lib/kubelet/pods/51338fbc-fcb2-458b-9b02-8f7fec515821/volumes" Nov 24 12:28:52 crc kubenswrapper[4930]: I1124 
12:28:52.027506 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-24swx"] Nov 24 12:28:52 crc kubenswrapper[4930]: I1124 12:28:52.036987 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-24swx"] Nov 24 12:28:52 crc kubenswrapper[4930]: I1124 12:28:52.095313 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb892b0-86ce-42f6-9c90-8acdb9a90a41" path="/var/lib/kubelet/pods/6eb892b0-86ce-42f6-9c90-8acdb9a90a41/volumes" Nov 24 12:28:56 crc kubenswrapper[4930]: I1124 12:28:56.084823 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:28:56 crc kubenswrapper[4930]: E1124 12:28:56.085452 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:29:06 crc kubenswrapper[4930]: I1124 12:29:06.589003 4930 generic.go:334] "Generic (PLEG): container finished" podID="7dab908a-df78-4c5a-945f-25221b75df7a" containerID="ff4620c95e4be664395ca3dd07bbaeb44d8037ed331168c3fa99cceae6dcd908" exitCode=0 Nov 24 12:29:06 crc kubenswrapper[4930]: I1124 12:29:06.589053 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" event={"ID":"7dab908a-df78-4c5a-945f-25221b75df7a","Type":"ContainerDied","Data":"ff4620c95e4be664395ca3dd07bbaeb44d8037ed331168c3fa99cceae6dcd908"} Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.006278 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.207623 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhbt8\" (UniqueName: \"kubernetes.io/projected/7dab908a-df78-4c5a-945f-25221b75df7a-kube-api-access-qhbt8\") pod \"7dab908a-df78-4c5a-945f-25221b75df7a\" (UID: \"7dab908a-df78-4c5a-945f-25221b75df7a\") " Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.207701 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7dab908a-df78-4c5a-945f-25221b75df7a-ssh-key\") pod \"7dab908a-df78-4c5a-945f-25221b75df7a\" (UID: \"7dab908a-df78-4c5a-945f-25221b75df7a\") " Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.207758 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dab908a-df78-4c5a-945f-25221b75df7a-inventory\") pod \"7dab908a-df78-4c5a-945f-25221b75df7a\" (UID: \"7dab908a-df78-4c5a-945f-25221b75df7a\") " Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.214315 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dab908a-df78-4c5a-945f-25221b75df7a-kube-api-access-qhbt8" (OuterVolumeSpecName: "kube-api-access-qhbt8") pod "7dab908a-df78-4c5a-945f-25221b75df7a" (UID: "7dab908a-df78-4c5a-945f-25221b75df7a"). InnerVolumeSpecName "kube-api-access-qhbt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.237359 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dab908a-df78-4c5a-945f-25221b75df7a-inventory" (OuterVolumeSpecName: "inventory") pod "7dab908a-df78-4c5a-945f-25221b75df7a" (UID: "7dab908a-df78-4c5a-945f-25221b75df7a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.238140 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dab908a-df78-4c5a-945f-25221b75df7a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7dab908a-df78-4c5a-945f-25221b75df7a" (UID: "7dab908a-df78-4c5a-945f-25221b75df7a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.309856 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhbt8\" (UniqueName: \"kubernetes.io/projected/7dab908a-df78-4c5a-945f-25221b75df7a-kube-api-access-qhbt8\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.310120 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7dab908a-df78-4c5a-945f-25221b75df7a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.310201 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dab908a-df78-4c5a-945f-25221b75df7a-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.604281 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" event={"ID":"7dab908a-df78-4c5a-945f-25221b75df7a","Type":"ContainerDied","Data":"83461a3c5d23ce4695338919371c7a5e34f66d1d8d8dcb1fc5df54dc502dd548"} Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.604320 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83461a3c5d23ce4695338919371c7a5e34f66d1d8d8dcb1fc5df54dc502dd548" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.604343 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.698857 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gglqx"] Nov 24 12:29:08 crc kubenswrapper[4930]: E1124 12:29:08.699365 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dab908a-df78-4c5a-945f-25221b75df7a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.699389 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dab908a-df78-4c5a-945f-25221b75df7a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.699645 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dab908a-df78-4c5a-945f-25221b75df7a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.700458 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.702604 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.702711 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.703021 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.704523 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.716260 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7v6v\" (UniqueName: \"kubernetes.io/projected/fea938c9-2678-4985-bbe3-8f15d9a3302b-kube-api-access-v7v6v\") pod \"ssh-known-hosts-edpm-deployment-gglqx\" (UID: \"fea938c9-2678-4985-bbe3-8f15d9a3302b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.716426 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fea938c9-2678-4985-bbe3-8f15d9a3302b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gglqx\" (UID: \"fea938c9-2678-4985-bbe3-8f15d9a3302b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.716464 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fea938c9-2678-4985-bbe3-8f15d9a3302b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gglqx\" 
(UID: \"fea938c9-2678-4985-bbe3-8f15d9a3302b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.716777 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gglqx"] Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.817912 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fea938c9-2678-4985-bbe3-8f15d9a3302b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gglqx\" (UID: \"fea938c9-2678-4985-bbe3-8f15d9a3302b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.818273 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fea938c9-2678-4985-bbe3-8f15d9a3302b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gglqx\" (UID: \"fea938c9-2678-4985-bbe3-8f15d9a3302b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.818363 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7v6v\" (UniqueName: \"kubernetes.io/projected/fea938c9-2678-4985-bbe3-8f15d9a3302b-kube-api-access-v7v6v\") pod \"ssh-known-hosts-edpm-deployment-gglqx\" (UID: \"fea938c9-2678-4985-bbe3-8f15d9a3302b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.822521 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fea938c9-2678-4985-bbe3-8f15d9a3302b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gglqx\" (UID: \"fea938c9-2678-4985-bbe3-8f15d9a3302b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.823469 4930 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fea938c9-2678-4985-bbe3-8f15d9a3302b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gglqx\" (UID: \"fea938c9-2678-4985-bbe3-8f15d9a3302b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" Nov 24 12:29:08 crc kubenswrapper[4930]: I1124 12:29:08.836765 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7v6v\" (UniqueName: \"kubernetes.io/projected/fea938c9-2678-4985-bbe3-8f15d9a3302b-kube-api-access-v7v6v\") pod \"ssh-known-hosts-edpm-deployment-gglqx\" (UID: \"fea938c9-2678-4985-bbe3-8f15d9a3302b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" Nov 24 12:29:09 crc kubenswrapper[4930]: I1124 12:29:09.018830 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" Nov 24 12:29:09 crc kubenswrapper[4930]: I1124 12:29:09.085751 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:29:09 crc kubenswrapper[4930]: E1124 12:29:09.086118 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:29:09 crc kubenswrapper[4930]: I1124 12:29:09.584489 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gglqx"] Nov 24 12:29:09 crc kubenswrapper[4930]: I1124 12:29:09.615923 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" 
event={"ID":"fea938c9-2678-4985-bbe3-8f15d9a3302b","Type":"ContainerStarted","Data":"3b2ce5238f89423d53ad4c8c97b0258087166aaeae002fb9a062df1dcf93af0f"} Nov 24 12:29:10 crc kubenswrapper[4930]: I1124 12:29:10.626095 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" event={"ID":"fea938c9-2678-4985-bbe3-8f15d9a3302b","Type":"ContainerStarted","Data":"346a65e0882a79785c8affa7e4e2de2f6772caeca91052541181b5204a6cfb98"} Nov 24 12:29:10 crc kubenswrapper[4930]: I1124 12:29:10.648331 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" podStartSLOduration=2.176632473 podStartE2EDuration="2.648312368s" podCreationTimestamp="2025-11-24 12:29:08 +0000 UTC" firstStartedPulling="2025-11-24 12:29:09.587681031 +0000 UTC m=+1796.202008981" lastFinishedPulling="2025-11-24 12:29:10.059360926 +0000 UTC m=+1796.673688876" observedRunningTime="2025-11-24 12:29:10.647138054 +0000 UTC m=+1797.261466004" watchObservedRunningTime="2025-11-24 12:29:10.648312368 +0000 UTC m=+1797.262640328" Nov 24 12:29:17 crc kubenswrapper[4930]: I1124 12:29:17.688059 4930 generic.go:334] "Generic (PLEG): container finished" podID="fea938c9-2678-4985-bbe3-8f15d9a3302b" containerID="346a65e0882a79785c8affa7e4e2de2f6772caeca91052541181b5204a6cfb98" exitCode=0 Nov 24 12:29:17 crc kubenswrapper[4930]: I1124 12:29:17.688115 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" event={"ID":"fea938c9-2678-4985-bbe3-8f15d9a3302b","Type":"ContainerDied","Data":"346a65e0882a79785c8affa7e4e2de2f6772caeca91052541181b5204a6cfb98"} Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.171017 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.332857 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fea938c9-2678-4985-bbe3-8f15d9a3302b-inventory-0\") pod \"fea938c9-2678-4985-bbe3-8f15d9a3302b\" (UID: \"fea938c9-2678-4985-bbe3-8f15d9a3302b\") " Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.333118 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7v6v\" (UniqueName: \"kubernetes.io/projected/fea938c9-2678-4985-bbe3-8f15d9a3302b-kube-api-access-v7v6v\") pod \"fea938c9-2678-4985-bbe3-8f15d9a3302b\" (UID: \"fea938c9-2678-4985-bbe3-8f15d9a3302b\") " Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.333200 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fea938c9-2678-4985-bbe3-8f15d9a3302b-ssh-key-openstack-edpm-ipam\") pod \"fea938c9-2678-4985-bbe3-8f15d9a3302b\" (UID: \"fea938c9-2678-4985-bbe3-8f15d9a3302b\") " Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.338945 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea938c9-2678-4985-bbe3-8f15d9a3302b-kube-api-access-v7v6v" (OuterVolumeSpecName: "kube-api-access-v7v6v") pod "fea938c9-2678-4985-bbe3-8f15d9a3302b" (UID: "fea938c9-2678-4985-bbe3-8f15d9a3302b"). InnerVolumeSpecName "kube-api-access-v7v6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.359462 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea938c9-2678-4985-bbe3-8f15d9a3302b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "fea938c9-2678-4985-bbe3-8f15d9a3302b" (UID: "fea938c9-2678-4985-bbe3-8f15d9a3302b"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.374068 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea938c9-2678-4985-bbe3-8f15d9a3302b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fea938c9-2678-4985-bbe3-8f15d9a3302b" (UID: "fea938c9-2678-4985-bbe3-8f15d9a3302b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.383195 4930 scope.go:117] "RemoveContainer" containerID="3a7ed2b94fa6114dd857cba24ec3d5a5f49d0476fda615c89f4c741f72768a45" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.436037 4930 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fea938c9-2678-4985-bbe3-8f15d9a3302b-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.436074 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7v6v\" (UniqueName: \"kubernetes.io/projected/fea938c9-2678-4985-bbe3-8f15d9a3302b-kube-api-access-v7v6v\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.436087 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fea938c9-2678-4985-bbe3-8f15d9a3302b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.461265 4930 scope.go:117] "RemoveContainer" containerID="bc1e1e995ba678bcfea2404ea96a1f998466501417fdbf1929fe713d23a9d9f0" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.496851 4930 scope.go:117] "RemoveContainer" containerID="c1cbb6ecc6454effac40cf4b3df72296e2d98a939dc097da1c2eea2579427aaf" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.712422 4930 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" event={"ID":"fea938c9-2678-4985-bbe3-8f15d9a3302b","Type":"ContainerDied","Data":"3b2ce5238f89423d53ad4c8c97b0258087166aaeae002fb9a062df1dcf93af0f"} Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.712460 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b2ce5238f89423d53ad4c8c97b0258087166aaeae002fb9a062df1dcf93af0f" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.712521 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gglqx" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.786350 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4"] Nov 24 12:29:19 crc kubenswrapper[4930]: E1124 12:29:19.786882 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea938c9-2678-4985-bbe3-8f15d9a3302b" containerName="ssh-known-hosts-edpm-deployment" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.786908 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea938c9-2678-4985-bbe3-8f15d9a3302b" containerName="ssh-known-hosts-edpm-deployment" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.787132 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="fea938c9-2678-4985-bbe3-8f15d9a3302b" containerName="ssh-known-hosts-edpm-deployment" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.787996 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.789951 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.790224 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.790723 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.790797 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.803122 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4"] Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.946625 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5vt7\" (UniqueName: \"kubernetes.io/projected/2454068c-7c38-4a67-8830-63a6b0add307-kube-api-access-w5vt7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gzmr4\" (UID: \"2454068c-7c38-4a67-8830-63a6b0add307\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.946805 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2454068c-7c38-4a67-8830-63a6b0add307-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gzmr4\" (UID: \"2454068c-7c38-4a67-8830-63a6b0add307\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" Nov 24 12:29:19 crc kubenswrapper[4930]: I1124 12:29:19.946872 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2454068c-7c38-4a67-8830-63a6b0add307-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gzmr4\" (UID: \"2454068c-7c38-4a67-8830-63a6b0add307\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" Nov 24 12:29:20 crc kubenswrapper[4930]: I1124 12:29:20.048262 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2454068c-7c38-4a67-8830-63a6b0add307-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gzmr4\" (UID: \"2454068c-7c38-4a67-8830-63a6b0add307\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" Nov 24 12:29:20 crc kubenswrapper[4930]: I1124 12:29:20.048645 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2454068c-7c38-4a67-8830-63a6b0add307-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gzmr4\" (UID: \"2454068c-7c38-4a67-8830-63a6b0add307\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" Nov 24 12:29:20 crc kubenswrapper[4930]: I1124 12:29:20.048817 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5vt7\" (UniqueName: \"kubernetes.io/projected/2454068c-7c38-4a67-8830-63a6b0add307-kube-api-access-w5vt7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gzmr4\" (UID: \"2454068c-7c38-4a67-8830-63a6b0add307\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" Nov 24 12:29:20 crc kubenswrapper[4930]: I1124 12:29:20.054245 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2454068c-7c38-4a67-8830-63a6b0add307-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gzmr4\" (UID: \"2454068c-7c38-4a67-8830-63a6b0add307\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" Nov 24 12:29:20 crc kubenswrapper[4930]: I1124 12:29:20.054764 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2454068c-7c38-4a67-8830-63a6b0add307-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gzmr4\" (UID: \"2454068c-7c38-4a67-8830-63a6b0add307\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" Nov 24 12:29:20 crc kubenswrapper[4930]: I1124 12:29:20.067836 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5vt7\" (UniqueName: \"kubernetes.io/projected/2454068c-7c38-4a67-8830-63a6b0add307-kube-api-access-w5vt7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gzmr4\" (UID: \"2454068c-7c38-4a67-8830-63a6b0add307\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" Nov 24 12:29:20 crc kubenswrapper[4930]: I1124 12:29:20.113001 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" Nov 24 12:29:20 crc kubenswrapper[4930]: I1124 12:29:20.594719 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4"] Nov 24 12:29:20 crc kubenswrapper[4930]: I1124 12:29:20.720577 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" event={"ID":"2454068c-7c38-4a67-8830-63a6b0add307","Type":"ContainerStarted","Data":"1beb8573d5de4adb98d11e5bb223bbed1f55968bbde98f1766706e3093b755e9"} Nov 24 12:29:21 crc kubenswrapper[4930]: I1124 12:29:21.728891 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" event={"ID":"2454068c-7c38-4a67-8830-63a6b0add307","Type":"ContainerStarted","Data":"0e86e4e1e74f5227415a9bcb81ca65596f9362e61c85e8fb8d70d7ebf84245b7"} Nov 24 12:29:21 crc kubenswrapper[4930]: I1124 12:29:21.742865 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" podStartSLOduration=2.115535186 podStartE2EDuration="2.742843447s" podCreationTimestamp="2025-11-24 12:29:19 +0000 UTC" firstStartedPulling="2025-11-24 12:29:20.600892501 +0000 UTC m=+1807.215220451" lastFinishedPulling="2025-11-24 12:29:21.228200722 +0000 UTC m=+1807.842528712" observedRunningTime="2025-11-24 12:29:21.742177528 +0000 UTC m=+1808.356505478" watchObservedRunningTime="2025-11-24 12:29:21.742843447 +0000 UTC m=+1808.357171387" Nov 24 12:29:22 crc kubenswrapper[4930]: I1124 12:29:22.085074 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:29:22 crc kubenswrapper[4930]: E1124 12:29:22.085671 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:29:29 crc kubenswrapper[4930]: I1124 12:29:29.811891 4930 generic.go:334] "Generic (PLEG): container finished" podID="2454068c-7c38-4a67-8830-63a6b0add307" containerID="0e86e4e1e74f5227415a9bcb81ca65596f9362e61c85e8fb8d70d7ebf84245b7" exitCode=0 Nov 24 12:29:29 crc kubenswrapper[4930]: I1124 12:29:29.811991 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" event={"ID":"2454068c-7c38-4a67-8830-63a6b0add307","Type":"ContainerDied","Data":"0e86e4e1e74f5227415a9bcb81ca65596f9362e61c85e8fb8d70d7ebf84245b7"} Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.219838 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.414454 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2454068c-7c38-4a67-8830-63a6b0add307-inventory\") pod \"2454068c-7c38-4a67-8830-63a6b0add307\" (UID: \"2454068c-7c38-4a67-8830-63a6b0add307\") " Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.415630 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2454068c-7c38-4a67-8830-63a6b0add307-ssh-key\") pod \"2454068c-7c38-4a67-8830-63a6b0add307\" (UID: \"2454068c-7c38-4a67-8830-63a6b0add307\") " Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.415862 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5vt7\" (UniqueName: 
\"kubernetes.io/projected/2454068c-7c38-4a67-8830-63a6b0add307-kube-api-access-w5vt7\") pod \"2454068c-7c38-4a67-8830-63a6b0add307\" (UID: \"2454068c-7c38-4a67-8830-63a6b0add307\") " Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.419399 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2454068c-7c38-4a67-8830-63a6b0add307-kube-api-access-w5vt7" (OuterVolumeSpecName: "kube-api-access-w5vt7") pod "2454068c-7c38-4a67-8830-63a6b0add307" (UID: "2454068c-7c38-4a67-8830-63a6b0add307"). InnerVolumeSpecName "kube-api-access-w5vt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.443660 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2454068c-7c38-4a67-8830-63a6b0add307-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2454068c-7c38-4a67-8830-63a6b0add307" (UID: "2454068c-7c38-4a67-8830-63a6b0add307"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.452696 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2454068c-7c38-4a67-8830-63a6b0add307-inventory" (OuterVolumeSpecName: "inventory") pod "2454068c-7c38-4a67-8830-63a6b0add307" (UID: "2454068c-7c38-4a67-8830-63a6b0add307"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.517957 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5vt7\" (UniqueName: \"kubernetes.io/projected/2454068c-7c38-4a67-8830-63a6b0add307-kube-api-access-w5vt7\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.517990 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2454068c-7c38-4a67-8830-63a6b0add307-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.518001 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2454068c-7c38-4a67-8830-63a6b0add307-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.833994 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" event={"ID":"2454068c-7c38-4a67-8830-63a6b0add307","Type":"ContainerDied","Data":"1beb8573d5de4adb98d11e5bb223bbed1f55968bbde98f1766706e3093b755e9"} Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.834330 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1beb8573d5de4adb98d11e5bb223bbed1f55968bbde98f1766706e3093b755e9" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.834113 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gzmr4" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.919379 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87"] Nov 24 12:29:31 crc kubenswrapper[4930]: E1124 12:29:31.919893 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2454068c-7c38-4a67-8830-63a6b0add307" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.919917 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="2454068c-7c38-4a67-8830-63a6b0add307" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.920204 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="2454068c-7c38-4a67-8830-63a6b0add307" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.920999 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.924177 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.925472 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.926836 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.931587 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:29:31 crc kubenswrapper[4930]: I1124 12:29:31.935399 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87"] Nov 24 12:29:32 crc kubenswrapper[4930]: I1124 12:29:32.028259 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05ff1b01-0d59-4a45-9683-41ae2e8163bc-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-84f87\" (UID: \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" Nov 24 12:29:32 crc kubenswrapper[4930]: I1124 12:29:32.028302 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05ff1b01-0d59-4a45-9683-41ae2e8163bc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-84f87\" (UID: \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" Nov 24 12:29:32 crc kubenswrapper[4930]: I1124 12:29:32.028343 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwr4h\" (UniqueName: \"kubernetes.io/projected/05ff1b01-0d59-4a45-9683-41ae2e8163bc-kube-api-access-hwr4h\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-84f87\" (UID: \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" Nov 24 12:29:32 crc kubenswrapper[4930]: I1124 12:29:32.130752 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05ff1b01-0d59-4a45-9683-41ae2e8163bc-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-84f87\" (UID: \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" Nov 24 12:29:32 crc kubenswrapper[4930]: I1124 12:29:32.130810 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05ff1b01-0d59-4a45-9683-41ae2e8163bc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-84f87\" (UID: \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" Nov 24 12:29:32 crc kubenswrapper[4930]: I1124 12:29:32.130851 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwr4h\" (UniqueName: \"kubernetes.io/projected/05ff1b01-0d59-4a45-9683-41ae2e8163bc-kube-api-access-hwr4h\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-84f87\" (UID: \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" Nov 24 12:29:32 crc kubenswrapper[4930]: I1124 12:29:32.135585 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05ff1b01-0d59-4a45-9683-41ae2e8163bc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-84f87\" (UID: 
\"05ff1b01-0d59-4a45-9683-41ae2e8163bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" Nov 24 12:29:32 crc kubenswrapper[4930]: I1124 12:29:32.149106 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05ff1b01-0d59-4a45-9683-41ae2e8163bc-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-84f87\" (UID: \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" Nov 24 12:29:32 crc kubenswrapper[4930]: I1124 12:29:32.163441 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwr4h\" (UniqueName: \"kubernetes.io/projected/05ff1b01-0d59-4a45-9683-41ae2e8163bc-kube-api-access-hwr4h\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-84f87\" (UID: \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" Nov 24 12:29:32 crc kubenswrapper[4930]: I1124 12:29:32.241452 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" Nov 24 12:29:32 crc kubenswrapper[4930]: I1124 12:29:32.738582 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87"] Nov 24 12:29:32 crc kubenswrapper[4930]: W1124 12:29:32.748367 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05ff1b01_0d59_4a45_9683_41ae2e8163bc.slice/crio-a4ab9121d3d60377d32fca0746b8fe889c2c0608a3c97ceac3d38260be438c97 WatchSource:0}: Error finding container a4ab9121d3d60377d32fca0746b8fe889c2c0608a3c97ceac3d38260be438c97: Status 404 returned error can't find the container with id a4ab9121d3d60377d32fca0746b8fe889c2c0608a3c97ceac3d38260be438c97 Nov 24 12:29:32 crc kubenswrapper[4930]: I1124 12:29:32.844468 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" event={"ID":"05ff1b01-0d59-4a45-9683-41ae2e8163bc","Type":"ContainerStarted","Data":"a4ab9121d3d60377d32fca0746b8fe889c2c0608a3c97ceac3d38260be438c97"} Nov 24 12:29:33 crc kubenswrapper[4930]: I1124 12:29:33.856523 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" event={"ID":"05ff1b01-0d59-4a45-9683-41ae2e8163bc","Type":"ContainerStarted","Data":"8d46e71adb75bd008ac9a24ba191b8a793c1b04c0dbace5c0c81c9118fae6bef"} Nov 24 12:29:33 crc kubenswrapper[4930]: I1124 12:29:33.885437 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" podStartSLOduration=2.3934352580000002 podStartE2EDuration="2.885416498s" podCreationTimestamp="2025-11-24 12:29:31 +0000 UTC" firstStartedPulling="2025-11-24 12:29:32.751399803 +0000 UTC m=+1819.365727753" lastFinishedPulling="2025-11-24 12:29:33.243381053 +0000 UTC m=+1819.857708993" 
observedRunningTime="2025-11-24 12:29:33.873105063 +0000 UTC m=+1820.487433033" watchObservedRunningTime="2025-11-24 12:29:33.885416498 +0000 UTC m=+1820.499744468" Nov 24 12:29:36 crc kubenswrapper[4930]: I1124 12:29:36.035317 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-4rdst"] Nov 24 12:29:36 crc kubenswrapper[4930]: I1124 12:29:36.043914 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-4rdst"] Nov 24 12:29:36 crc kubenswrapper[4930]: I1124 12:29:36.095446 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c4d14a-c3de-4c3b-a2a8-2148d04821d6" path="/var/lib/kubelet/pods/29c4d14a-c3de-4c3b-a2a8-2148d04821d6/volumes" Nov 24 12:29:37 crc kubenswrapper[4930]: I1124 12:29:37.084382 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:29:37 crc kubenswrapper[4930]: E1124 12:29:37.085743 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:29:42 crc kubenswrapper[4930]: I1124 12:29:42.924335 4930 generic.go:334] "Generic (PLEG): container finished" podID="05ff1b01-0d59-4a45-9683-41ae2e8163bc" containerID="8d46e71adb75bd008ac9a24ba191b8a793c1b04c0dbace5c0c81c9118fae6bef" exitCode=0 Nov 24 12:29:42 crc kubenswrapper[4930]: I1124 12:29:42.924470 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" 
event={"ID":"05ff1b01-0d59-4a45-9683-41ae2e8163bc","Type":"ContainerDied","Data":"8d46e71adb75bd008ac9a24ba191b8a793c1b04c0dbace5c0c81c9118fae6bef"} Nov 24 12:29:44 crc kubenswrapper[4930]: I1124 12:29:44.367598 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" Nov 24 12:29:44 crc kubenswrapper[4930]: I1124 12:29:44.569277 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwr4h\" (UniqueName: \"kubernetes.io/projected/05ff1b01-0d59-4a45-9683-41ae2e8163bc-kube-api-access-hwr4h\") pod \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\" (UID: \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\") " Nov 24 12:29:44 crc kubenswrapper[4930]: I1124 12:29:44.569415 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05ff1b01-0d59-4a45-9683-41ae2e8163bc-ssh-key\") pod \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\" (UID: \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\") " Nov 24 12:29:44 crc kubenswrapper[4930]: I1124 12:29:44.569490 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05ff1b01-0d59-4a45-9683-41ae2e8163bc-inventory\") pod \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\" (UID: \"05ff1b01-0d59-4a45-9683-41ae2e8163bc\") " Nov 24 12:29:44 crc kubenswrapper[4930]: I1124 12:29:44.577672 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ff1b01-0d59-4a45-9683-41ae2e8163bc-kube-api-access-hwr4h" (OuterVolumeSpecName: "kube-api-access-hwr4h") pod "05ff1b01-0d59-4a45-9683-41ae2e8163bc" (UID: "05ff1b01-0d59-4a45-9683-41ae2e8163bc"). InnerVolumeSpecName "kube-api-access-hwr4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:44 crc kubenswrapper[4930]: I1124 12:29:44.596260 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ff1b01-0d59-4a45-9683-41ae2e8163bc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "05ff1b01-0d59-4a45-9683-41ae2e8163bc" (UID: "05ff1b01-0d59-4a45-9683-41ae2e8163bc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:44 crc kubenswrapper[4930]: I1124 12:29:44.597440 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ff1b01-0d59-4a45-9683-41ae2e8163bc-inventory" (OuterVolumeSpecName: "inventory") pod "05ff1b01-0d59-4a45-9683-41ae2e8163bc" (UID: "05ff1b01-0d59-4a45-9683-41ae2e8163bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:44 crc kubenswrapper[4930]: I1124 12:29:44.671971 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05ff1b01-0d59-4a45-9683-41ae2e8163bc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:44 crc kubenswrapper[4930]: I1124 12:29:44.672009 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05ff1b01-0d59-4a45-9683-41ae2e8163bc-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:44 crc kubenswrapper[4930]: I1124 12:29:44.672023 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwr4h\" (UniqueName: \"kubernetes.io/projected/05ff1b01-0d59-4a45-9683-41ae2e8163bc-kube-api-access-hwr4h\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:44 crc kubenswrapper[4930]: I1124 12:29:44.945225 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" 
event={"ID":"05ff1b01-0d59-4a45-9683-41ae2e8163bc","Type":"ContainerDied","Data":"a4ab9121d3d60377d32fca0746b8fe889c2c0608a3c97ceac3d38260be438c97"} Nov 24 12:29:44 crc kubenswrapper[4930]: I1124 12:29:44.945267 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4ab9121d3d60377d32fca0746b8fe889c2c0608a3c97ceac3d38260be438c97" Nov 24 12:29:44 crc kubenswrapper[4930]: I1124 12:29:44.945273 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-84f87" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.109628 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w"] Nov 24 12:29:45 crc kubenswrapper[4930]: E1124 12:29:45.110165 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ff1b01-0d59-4a45-9683-41ae2e8163bc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.110184 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ff1b01-0d59-4a45-9683-41ae2e8163bc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.110441 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ff1b01-0d59-4a45-9683-41ae2e8163bc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.111235 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.114818 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.114864 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.115230 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.115126 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.115405 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.115476 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.115563 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.120137 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.129348 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w"] Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.184520 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.184602 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.184638 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.184671 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.184734 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.184766 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.184797 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.184853 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.184881 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.184922 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g777q\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-kube-api-access-g777q\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.184988 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.185834 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.185889 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.185923 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.287807 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.287851 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.287891 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.287913 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.287944 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g777q\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-kube-api-access-g777q\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.287986 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.288022 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.288050 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.288079 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.288099 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.288123 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: 
\"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.288141 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.288161 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.288196 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.291233 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 
12:29:45.292506 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.292718 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.293429 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.293579 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.294274 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.294734 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.295499 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.296764 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.296938 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.297161 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.297806 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.305766 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g777q\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-kube-api-access-g777q\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.306068 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w\" (UID: 
\"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.446085 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:29:45 crc kubenswrapper[4930]: I1124 12:29:45.977892 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w"] Nov 24 12:29:46 crc kubenswrapper[4930]: I1124 12:29:46.965514 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" event={"ID":"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b","Type":"ContainerStarted","Data":"97438f3395d32f94b3a97919e9e16c7d314578c1a046e071c2a9d9031b5186fd"} Nov 24 12:29:46 crc kubenswrapper[4930]: I1124 12:29:46.965847 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" event={"ID":"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b","Type":"ContainerStarted","Data":"542633c71c5cdf070ba6d2e087f8ab4111cf6763b47594dae8c2c0d895ceec5c"} Nov 24 12:29:47 crc kubenswrapper[4930]: I1124 12:29:47.002264 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" podStartSLOduration=1.536098447 podStartE2EDuration="2.002244082s" podCreationTimestamp="2025-11-24 12:29:45 +0000 UTC" firstStartedPulling="2025-11-24 12:29:45.995026938 +0000 UTC m=+1832.609354898" lastFinishedPulling="2025-11-24 12:29:46.461172573 +0000 UTC m=+1833.075500533" observedRunningTime="2025-11-24 12:29:46.980871315 +0000 UTC m=+1833.595199265" watchObservedRunningTime="2025-11-24 12:29:47.002244082 +0000 UTC m=+1833.616572032" Nov 24 12:29:49 crc kubenswrapper[4930]: I1124 12:29:49.084996 4930 scope.go:117] "RemoveContainer" 
containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:29:49 crc kubenswrapper[4930]: E1124 12:29:49.085616 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.150864 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr"] Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.152826 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.156265 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.156606 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.164127 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr"] Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.333175 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f9064167-31db-4951-8ab3-e17d70f5537f-secret-volume\") pod \"collect-profiles-29399790-zctdr\" (UID: \"f9064167-31db-4951-8ab3-e17d70f5537f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.333233 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9064167-31db-4951-8ab3-e17d70f5537f-config-volume\") pod \"collect-profiles-29399790-zctdr\" (UID: \"f9064167-31db-4951-8ab3-e17d70f5537f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.333258 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xbqm\" (UniqueName: \"kubernetes.io/projected/f9064167-31db-4951-8ab3-e17d70f5537f-kube-api-access-5xbqm\") pod \"collect-profiles-29399790-zctdr\" (UID: \"f9064167-31db-4951-8ab3-e17d70f5537f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.435457 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f9064167-31db-4951-8ab3-e17d70f5537f-secret-volume\") pod \"collect-profiles-29399790-zctdr\" (UID: \"f9064167-31db-4951-8ab3-e17d70f5537f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.435520 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9064167-31db-4951-8ab3-e17d70f5537f-config-volume\") pod \"collect-profiles-29399790-zctdr\" (UID: \"f9064167-31db-4951-8ab3-e17d70f5537f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.435575 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xbqm\" (UniqueName: 
\"kubernetes.io/projected/f9064167-31db-4951-8ab3-e17d70f5537f-kube-api-access-5xbqm\") pod \"collect-profiles-29399790-zctdr\" (UID: \"f9064167-31db-4951-8ab3-e17d70f5537f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.436458 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9064167-31db-4951-8ab3-e17d70f5537f-config-volume\") pod \"collect-profiles-29399790-zctdr\" (UID: \"f9064167-31db-4951-8ab3-e17d70f5537f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.441842 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f9064167-31db-4951-8ab3-e17d70f5537f-secret-volume\") pod \"collect-profiles-29399790-zctdr\" (UID: \"f9064167-31db-4951-8ab3-e17d70f5537f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.452910 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xbqm\" (UniqueName: \"kubernetes.io/projected/f9064167-31db-4951-8ab3-e17d70f5537f-kube-api-access-5xbqm\") pod \"collect-profiles-29399790-zctdr\" (UID: \"f9064167-31db-4951-8ab3-e17d70f5537f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.473242 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" Nov 24 12:30:00 crc kubenswrapper[4930]: I1124 12:30:00.939419 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr"] Nov 24 12:30:01 crc kubenswrapper[4930]: I1124 12:30:01.109841 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" event={"ID":"f9064167-31db-4951-8ab3-e17d70f5537f","Type":"ContainerStarted","Data":"32d8955dad0da73afdbd7f36ef31d8ef50c75b1249ebd7f53a7a2013649cf517"} Nov 24 12:30:02 crc kubenswrapper[4930]: I1124 12:30:02.122619 4930 generic.go:334] "Generic (PLEG): container finished" podID="f9064167-31db-4951-8ab3-e17d70f5537f" containerID="c4b2fb479b38f1f30e032342defda23a60d91da8455815d819d1b9c082dd0207" exitCode=0 Nov 24 12:30:02 crc kubenswrapper[4930]: I1124 12:30:02.122737 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" event={"ID":"f9064167-31db-4951-8ab3-e17d70f5537f","Type":"ContainerDied","Data":"c4b2fb479b38f1f30e032342defda23a60d91da8455815d819d1b9c082dd0207"} Nov 24 12:30:03 crc kubenswrapper[4930]: I1124 12:30:03.085235 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:30:03 crc kubenswrapper[4930]: E1124 12:30:03.085625 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:30:03 crc kubenswrapper[4930]: I1124 12:30:03.472850 4930 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" Nov 24 12:30:03 crc kubenswrapper[4930]: I1124 12:30:03.595084 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xbqm\" (UniqueName: \"kubernetes.io/projected/f9064167-31db-4951-8ab3-e17d70f5537f-kube-api-access-5xbqm\") pod \"f9064167-31db-4951-8ab3-e17d70f5537f\" (UID: \"f9064167-31db-4951-8ab3-e17d70f5537f\") " Nov 24 12:30:03 crc kubenswrapper[4930]: I1124 12:30:03.595214 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f9064167-31db-4951-8ab3-e17d70f5537f-secret-volume\") pod \"f9064167-31db-4951-8ab3-e17d70f5537f\" (UID: \"f9064167-31db-4951-8ab3-e17d70f5537f\") " Nov 24 12:30:03 crc kubenswrapper[4930]: I1124 12:30:03.595245 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9064167-31db-4951-8ab3-e17d70f5537f-config-volume\") pod \"f9064167-31db-4951-8ab3-e17d70f5537f\" (UID: \"f9064167-31db-4951-8ab3-e17d70f5537f\") " Nov 24 12:30:03 crc kubenswrapper[4930]: I1124 12:30:03.596347 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9064167-31db-4951-8ab3-e17d70f5537f-config-volume" (OuterVolumeSpecName: "config-volume") pod "f9064167-31db-4951-8ab3-e17d70f5537f" (UID: "f9064167-31db-4951-8ab3-e17d70f5537f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:30:03 crc kubenswrapper[4930]: I1124 12:30:03.600694 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9064167-31db-4951-8ab3-e17d70f5537f-kube-api-access-5xbqm" (OuterVolumeSpecName: "kube-api-access-5xbqm") pod "f9064167-31db-4951-8ab3-e17d70f5537f" (UID: "f9064167-31db-4951-8ab3-e17d70f5537f"). 
InnerVolumeSpecName "kube-api-access-5xbqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:03 crc kubenswrapper[4930]: I1124 12:30:03.600760 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9064167-31db-4951-8ab3-e17d70f5537f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f9064167-31db-4951-8ab3-e17d70f5537f" (UID: "f9064167-31db-4951-8ab3-e17d70f5537f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:03 crc kubenswrapper[4930]: I1124 12:30:03.697888 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xbqm\" (UniqueName: \"kubernetes.io/projected/f9064167-31db-4951-8ab3-e17d70f5537f-kube-api-access-5xbqm\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:03 crc kubenswrapper[4930]: I1124 12:30:03.697931 4930 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f9064167-31db-4951-8ab3-e17d70f5537f-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:03 crc kubenswrapper[4930]: I1124 12:30:03.697940 4930 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9064167-31db-4951-8ab3-e17d70f5537f-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:04 crc kubenswrapper[4930]: I1124 12:30:04.138422 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" event={"ID":"f9064167-31db-4951-8ab3-e17d70f5537f","Type":"ContainerDied","Data":"32d8955dad0da73afdbd7f36ef31d8ef50c75b1249ebd7f53a7a2013649cf517"} Nov 24 12:30:04 crc kubenswrapper[4930]: I1124 12:30:04.138872 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32d8955dad0da73afdbd7f36ef31d8ef50c75b1249ebd7f53a7a2013649cf517" Nov 24 12:30:04 crc kubenswrapper[4930]: I1124 12:30:04.138829 4930 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-zctdr" Nov 24 12:30:14 crc kubenswrapper[4930]: I1124 12:30:14.091163 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:30:14 crc kubenswrapper[4930]: E1124 12:30:14.091991 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:30:19 crc kubenswrapper[4930]: I1124 12:30:19.580745 4930 scope.go:117] "RemoveContainer" containerID="a6da17f8192d6a6c47009adffa120eef41a379ca52741dee1c736409030e9825" Nov 24 12:30:25 crc kubenswrapper[4930]: I1124 12:30:25.330264 4930 generic.go:334] "Generic (PLEG): container finished" podID="dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" containerID="97438f3395d32f94b3a97919e9e16c7d314578c1a046e071c2a9d9031b5186fd" exitCode=0 Nov 24 12:30:25 crc kubenswrapper[4930]: I1124 12:30:25.330382 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" event={"ID":"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b","Type":"ContainerDied","Data":"97438f3395d32f94b3a97919e9e16c7d314578c1a046e071c2a9d9031b5186fd"} Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.711012 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.777516 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-ssh-key\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.777600 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-telemetry-combined-ca-bundle\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.777668 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g777q\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-kube-api-access-g777q\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.777696 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-libvirt-combined-ca-bundle\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.777729 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc 
kubenswrapper[4930]: I1124 12:30:26.777772 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.777814 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.777836 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-nova-combined-ca-bundle\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.777889 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-inventory\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.777922 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-ovn-combined-ca-bundle\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.777970 4930 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.778021 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-repo-setup-combined-ca-bundle\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.778055 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-neutron-metadata-combined-ca-bundle\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.778084 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-bootstrap-combined-ca-bundle\") pod \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\" (UID: \"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b\") " Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.783751 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.784205 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.785353 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.785406 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-kube-api-access-g777q" (OuterVolumeSpecName: "kube-api-access-g777q") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). InnerVolumeSpecName "kube-api-access-g777q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.785496 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.785607 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.785944 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.788632 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.790986 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.791043 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.791660 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.792086 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.812997 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.823609 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-inventory" (OuterVolumeSpecName: "inventory") pod "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" (UID: "dbe1f36a-7423-4635-bc7e-7ad5ba208b8b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.880855 4930 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.880888 4930 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.880899 4930 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.880910 4930 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.880922 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-inventory\") on node \"crc\" DevicePath 
\"\"" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.880930 4930 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.880938 4930 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.880949 4930 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.880959 4930 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.880967 4930 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.880977 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.880985 4930 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.880993 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g777q\" (UniqueName: \"kubernetes.io/projected/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-kube-api-access-g777q\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:26 crc kubenswrapper[4930]: I1124 12:30:26.881003 4930 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe1f36a-7423-4635-bc7e-7ad5ba208b8b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.345862 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" event={"ID":"dbe1f36a-7423-4635-bc7e-7ad5ba208b8b","Type":"ContainerDied","Data":"542633c71c5cdf070ba6d2e087f8ab4111cf6763b47594dae8c2c0d895ceec5c"} Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.345905 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="542633c71c5cdf070ba6d2e087f8ab4111cf6763b47594dae8c2c0d895ceec5c" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.345972 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.439525 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k"] Nov 24 12:30:27 crc kubenswrapper[4930]: E1124 12:30:27.439940 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9064167-31db-4951-8ab3-e17d70f5537f" containerName="collect-profiles" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.439956 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9064167-31db-4951-8ab3-e17d70f5537f" containerName="collect-profiles" Nov 24 12:30:27 crc kubenswrapper[4930]: E1124 12:30:27.439996 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.440004 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.440183 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9064167-31db-4951-8ab3-e17d70f5537f" containerName="collect-profiles" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.440216 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe1f36a-7423-4635-bc7e-7ad5ba208b8b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.440834 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.444058 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.444137 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.444287 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.444445 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.444508 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.450991 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k"] Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.520200 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh8z8\" (UniqueName: \"kubernetes.io/projected/48d052f4-e44f-45e2-856a-08346f84f5b8-kube-api-access-jh8z8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.520653 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/48d052f4-e44f-45e2-856a-08346f84f5b8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: 
\"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.520916 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.521199 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.521328 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.624162 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.624234 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.624305 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh8z8\" (UniqueName: \"kubernetes.io/projected/48d052f4-e44f-45e2-856a-08346f84f5b8-kube-api-access-jh8z8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.624382 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/48d052f4-e44f-45e2-856a-08346f84f5b8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.624432 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.626759 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/48d052f4-e44f-45e2-856a-08346f84f5b8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 
crc kubenswrapper[4930]: I1124 12:30:27.630329 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.631094 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.642019 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.646383 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh8z8\" (UniqueName: \"kubernetes.io/projected/48d052f4-e44f-45e2-856a-08346f84f5b8-kube-api-access-jh8z8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t562k\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:27 crc kubenswrapper[4930]: I1124 12:30:27.758996 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:30:28 crc kubenswrapper[4930]: I1124 12:30:28.085165 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:30:28 crc kubenswrapper[4930]: E1124 12:30:28.085791 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:30:28 crc kubenswrapper[4930]: I1124 12:30:28.378908 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k"] Nov 24 12:30:29 crc kubenswrapper[4930]: I1124 12:30:29.364746 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" event={"ID":"48d052f4-e44f-45e2-856a-08346f84f5b8","Type":"ContainerStarted","Data":"145a4b0fce448e23af73b890a6c6c81b1a8e308db8e6f35dce6dd86ef56d034e"} Nov 24 12:30:29 crc kubenswrapper[4930]: I1124 12:30:29.365093 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" event={"ID":"48d052f4-e44f-45e2-856a-08346f84f5b8","Type":"ContainerStarted","Data":"6ee38d247b952187da36f37a86e0c6c4527594d6093039f9a417d0d0b88c15b7"} Nov 24 12:30:29 crc kubenswrapper[4930]: I1124 12:30:29.384329 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" podStartSLOduration=1.8914288240000001 podStartE2EDuration="2.384308182s" podCreationTimestamp="2025-11-24 12:30:27 +0000 UTC" firstStartedPulling="2025-11-24 12:30:28.38547929 +0000 UTC 
m=+1874.999807240" lastFinishedPulling="2025-11-24 12:30:28.878358638 +0000 UTC m=+1875.492686598" observedRunningTime="2025-11-24 12:30:29.380935825 +0000 UTC m=+1875.995263785" watchObservedRunningTime="2025-11-24 12:30:29.384308182 +0000 UTC m=+1875.998636142" Nov 24 12:30:40 crc kubenswrapper[4930]: I1124 12:30:40.084594 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:30:40 crc kubenswrapper[4930]: E1124 12:30:40.085522 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.381921 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mf6cd"] Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.384177 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.391353 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mf6cd"] Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.427399 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktc7j\" (UniqueName: \"kubernetes.io/projected/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-kube-api-access-ktc7j\") pod \"redhat-operators-mf6cd\" (UID: \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\") " pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.427442 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-catalog-content\") pod \"redhat-operators-mf6cd\" (UID: \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\") " pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.427481 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-utilities\") pod \"redhat-operators-mf6cd\" (UID: \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\") " pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.528801 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktc7j\" (UniqueName: \"kubernetes.io/projected/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-kube-api-access-ktc7j\") pod \"redhat-operators-mf6cd\" (UID: \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\") " pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.528845 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-catalog-content\") pod \"redhat-operators-mf6cd\" (UID: \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\") " pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.528881 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-utilities\") pod \"redhat-operators-mf6cd\" (UID: \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\") " pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.529291 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-utilities\") pod \"redhat-operators-mf6cd\" (UID: \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\") " pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.529482 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-catalog-content\") pod \"redhat-operators-mf6cd\" (UID: \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\") " pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.550510 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktc7j\" (UniqueName: \"kubernetes.io/projected/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-kube-api-access-ktc7j\") pod \"redhat-operators-mf6cd\" (UID: \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\") " pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.584409 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n2bqw"] Nov 
24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.586268 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.604279 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n2bqw"] Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.632331 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0ee175-51ac-4313-bef6-278021bf7077-utilities\") pod \"certified-operators-n2bqw\" (UID: \"4a0ee175-51ac-4313-bef6-278021bf7077\") " pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.632457 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7sb\" (UniqueName: \"kubernetes.io/projected/4a0ee175-51ac-4313-bef6-278021bf7077-kube-api-access-hw7sb\") pod \"certified-operators-n2bqw\" (UID: \"4a0ee175-51ac-4313-bef6-278021bf7077\") " pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.632531 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0ee175-51ac-4313-bef6-278021bf7077-catalog-content\") pod \"certified-operators-n2bqw\" (UID: \"4a0ee175-51ac-4313-bef6-278021bf7077\") " pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.714080 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.734370 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw7sb\" (UniqueName: \"kubernetes.io/projected/4a0ee175-51ac-4313-bef6-278021bf7077-kube-api-access-hw7sb\") pod \"certified-operators-n2bqw\" (UID: \"4a0ee175-51ac-4313-bef6-278021bf7077\") " pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.734504 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0ee175-51ac-4313-bef6-278021bf7077-catalog-content\") pod \"certified-operators-n2bqw\" (UID: \"4a0ee175-51ac-4313-bef6-278021bf7077\") " pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.734599 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0ee175-51ac-4313-bef6-278021bf7077-utilities\") pod \"certified-operators-n2bqw\" (UID: \"4a0ee175-51ac-4313-bef6-278021bf7077\") " pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.735272 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0ee175-51ac-4313-bef6-278021bf7077-catalog-content\") pod \"certified-operators-n2bqw\" (UID: \"4a0ee175-51ac-4313-bef6-278021bf7077\") " pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.735303 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0ee175-51ac-4313-bef6-278021bf7077-utilities\") pod \"certified-operators-n2bqw\" (UID: \"4a0ee175-51ac-4313-bef6-278021bf7077\") " 
pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.751981 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw7sb\" (UniqueName: \"kubernetes.io/projected/4a0ee175-51ac-4313-bef6-278021bf7077-kube-api-access-hw7sb\") pod \"certified-operators-n2bqw\" (UID: \"4a0ee175-51ac-4313-bef6-278021bf7077\") " pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:42 crc kubenswrapper[4930]: I1124 12:30:42.916203 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:43 crc kubenswrapper[4930]: I1124 12:30:43.046927 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mf6cd"] Nov 24 12:30:43 crc kubenswrapper[4930]: I1124 12:30:43.521527 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n2bqw"] Nov 24 12:30:43 crc kubenswrapper[4930]: I1124 12:30:43.523088 4930 generic.go:334] "Generic (PLEG): container finished" podID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerID="c2a724ca9b74ce5cf96b139973f2153746b327d70c694fdff994132cd51d9031" exitCode=0 Nov 24 12:30:43 crc kubenswrapper[4930]: I1124 12:30:43.523206 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf6cd" event={"ID":"9d23c44c-6fd4-463b-8ed1-2f138fd0122c","Type":"ContainerDied","Data":"c2a724ca9b74ce5cf96b139973f2153746b327d70c694fdff994132cd51d9031"} Nov 24 12:30:43 crc kubenswrapper[4930]: I1124 12:30:43.523296 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf6cd" event={"ID":"9d23c44c-6fd4-463b-8ed1-2f138fd0122c","Type":"ContainerStarted","Data":"b2cd52af513054657a8e7820f436b979c3d170c9e149a42b890fb6903bc4492b"} Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.533680 4930 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-mf6cd" event={"ID":"9d23c44c-6fd4-463b-8ed1-2f138fd0122c","Type":"ContainerStarted","Data":"bdb52ea2a5b513602e9aaa5196653db8856aaf888a6858a935f130e6fd7358fa"} Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.535782 4930 generic.go:334] "Generic (PLEG): container finished" podID="4a0ee175-51ac-4313-bef6-278021bf7077" containerID="423682c0402afaec998cc4966ad73a4e1b26642809fe49556ddf0c960f8af88d" exitCode=0 Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.535829 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n2bqw" event={"ID":"4a0ee175-51ac-4313-bef6-278021bf7077","Type":"ContainerDied","Data":"423682c0402afaec998cc4966ad73a4e1b26642809fe49556ddf0c960f8af88d"} Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.535872 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n2bqw" event={"ID":"4a0ee175-51ac-4313-bef6-278021bf7077","Type":"ContainerStarted","Data":"18981ba9ec48e3491cc1673b5326c7f92ff5a6125d36115af00f62eb368f226e"} Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.771447 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k6w82"] Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.774007 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.782424 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d609896-3f79-4a48-801a-1d8919f1066d-utilities\") pod \"community-operators-k6w82\" (UID: \"4d609896-3f79-4a48-801a-1d8919f1066d\") " pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.782612 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6jm8\" (UniqueName: \"kubernetes.io/projected/4d609896-3f79-4a48-801a-1d8919f1066d-kube-api-access-v6jm8\") pod \"community-operators-k6w82\" (UID: \"4d609896-3f79-4a48-801a-1d8919f1066d\") " pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.782757 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d609896-3f79-4a48-801a-1d8919f1066d-catalog-content\") pod \"community-operators-k6w82\" (UID: \"4d609896-3f79-4a48-801a-1d8919f1066d\") " pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.790475 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k6w82"] Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.884330 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6jm8\" (UniqueName: \"kubernetes.io/projected/4d609896-3f79-4a48-801a-1d8919f1066d-kube-api-access-v6jm8\") pod \"community-operators-k6w82\" (UID: \"4d609896-3f79-4a48-801a-1d8919f1066d\") " pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.884437 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d609896-3f79-4a48-801a-1d8919f1066d-catalog-content\") pod \"community-operators-k6w82\" (UID: \"4d609896-3f79-4a48-801a-1d8919f1066d\") " pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.884489 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d609896-3f79-4a48-801a-1d8919f1066d-utilities\") pod \"community-operators-k6w82\" (UID: \"4d609896-3f79-4a48-801a-1d8919f1066d\") " pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.885081 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d609896-3f79-4a48-801a-1d8919f1066d-utilities\") pod \"community-operators-k6w82\" (UID: \"4d609896-3f79-4a48-801a-1d8919f1066d\") " pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.885092 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d609896-3f79-4a48-801a-1d8919f1066d-catalog-content\") pod \"community-operators-k6w82\" (UID: \"4d609896-3f79-4a48-801a-1d8919f1066d\") " pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.914293 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6jm8\" (UniqueName: \"kubernetes.io/projected/4d609896-3f79-4a48-801a-1d8919f1066d-kube-api-access-v6jm8\") pod \"community-operators-k6w82\" (UID: \"4d609896-3f79-4a48-801a-1d8919f1066d\") " pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.982900 4930 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-rxkkj"] Nov 24 12:30:44 crc kubenswrapper[4930]: I1124 12:30:44.986044 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.015605 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxkkj"] Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.087567 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njgw9\" (UniqueName: \"kubernetes.io/projected/0898c92f-ad7b-49b4-9111-2abe65697122-kube-api-access-njgw9\") pod \"redhat-marketplace-rxkkj\" (UID: \"0898c92f-ad7b-49b4-9111-2abe65697122\") " pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.087635 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0898c92f-ad7b-49b4-9111-2abe65697122-utilities\") pod \"redhat-marketplace-rxkkj\" (UID: \"0898c92f-ad7b-49b4-9111-2abe65697122\") " pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.087720 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0898c92f-ad7b-49b4-9111-2abe65697122-catalog-content\") pod \"redhat-marketplace-rxkkj\" (UID: \"0898c92f-ad7b-49b4-9111-2abe65697122\") " pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.105997 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.190956 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0898c92f-ad7b-49b4-9111-2abe65697122-catalog-content\") pod \"redhat-marketplace-rxkkj\" (UID: \"0898c92f-ad7b-49b4-9111-2abe65697122\") " pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.191272 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njgw9\" (UniqueName: \"kubernetes.io/projected/0898c92f-ad7b-49b4-9111-2abe65697122-kube-api-access-njgw9\") pod \"redhat-marketplace-rxkkj\" (UID: \"0898c92f-ad7b-49b4-9111-2abe65697122\") " pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.191315 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0898c92f-ad7b-49b4-9111-2abe65697122-utilities\") pod \"redhat-marketplace-rxkkj\" (UID: \"0898c92f-ad7b-49b4-9111-2abe65697122\") " pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.191386 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0898c92f-ad7b-49b4-9111-2abe65697122-catalog-content\") pod \"redhat-marketplace-rxkkj\" (UID: \"0898c92f-ad7b-49b4-9111-2abe65697122\") " pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.191601 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0898c92f-ad7b-49b4-9111-2abe65697122-utilities\") pod \"redhat-marketplace-rxkkj\" (UID: \"0898c92f-ad7b-49b4-9111-2abe65697122\") " 
pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.209285 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njgw9\" (UniqueName: \"kubernetes.io/projected/0898c92f-ad7b-49b4-9111-2abe65697122-kube-api-access-njgw9\") pod \"redhat-marketplace-rxkkj\" (UID: \"0898c92f-ad7b-49b4-9111-2abe65697122\") " pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.315986 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.704408 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k6w82"] Nov 24 12:30:45 crc kubenswrapper[4930]: W1124 12:30:45.706648 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d609896_3f79_4a48_801a_1d8919f1066d.slice/crio-c89482447542487eff59c334aea7d64441193cd51ecf0720db5d0911f72e3319 WatchSource:0}: Error finding container c89482447542487eff59c334aea7d64441193cd51ecf0720db5d0911f72e3319: Status 404 returned error can't find the container with id c89482447542487eff59c334aea7d64441193cd51ecf0720db5d0911f72e3319 Nov 24 12:30:45 crc kubenswrapper[4930]: I1124 12:30:45.824252 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxkkj"] Nov 24 12:30:45 crc kubenswrapper[4930]: W1124 12:30:45.845770 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0898c92f_ad7b_49b4_9111_2abe65697122.slice/crio-5566cb5c2049e20bdbc56c5c66318458858a8bf2d51676580daf9eda1e2399d5 WatchSource:0}: Error finding container 5566cb5c2049e20bdbc56c5c66318458858a8bf2d51676580daf9eda1e2399d5: Status 404 returned error can't find the container with 
id 5566cb5c2049e20bdbc56c5c66318458858a8bf2d51676580daf9eda1e2399d5 Nov 24 12:30:46 crc kubenswrapper[4930]: I1124 12:30:46.553561 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxkkj" event={"ID":"0898c92f-ad7b-49b4-9111-2abe65697122","Type":"ContainerStarted","Data":"2673e23b59c34902c1e104f46731ebd11d2120911f63739e35a3b3201cc34764"} Nov 24 12:30:46 crc kubenswrapper[4930]: I1124 12:30:46.553617 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxkkj" event={"ID":"0898c92f-ad7b-49b4-9111-2abe65697122","Type":"ContainerStarted","Data":"5566cb5c2049e20bdbc56c5c66318458858a8bf2d51676580daf9eda1e2399d5"} Nov 24 12:30:46 crc kubenswrapper[4930]: I1124 12:30:46.555376 4930 generic.go:334] "Generic (PLEG): container finished" podID="4a0ee175-51ac-4313-bef6-278021bf7077" containerID="ffc4e53ed7a0d88666bbca43928155415345f477234284085e813692640a474b" exitCode=0 Nov 24 12:30:46 crc kubenswrapper[4930]: I1124 12:30:46.555424 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n2bqw" event={"ID":"4a0ee175-51ac-4313-bef6-278021bf7077","Type":"ContainerDied","Data":"ffc4e53ed7a0d88666bbca43928155415345f477234284085e813692640a474b"} Nov 24 12:30:46 crc kubenswrapper[4930]: I1124 12:30:46.558087 4930 generic.go:334] "Generic (PLEG): container finished" podID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerID="bdb52ea2a5b513602e9aaa5196653db8856aaf888a6858a935f130e6fd7358fa" exitCode=0 Nov 24 12:30:46 crc kubenswrapper[4930]: I1124 12:30:46.558126 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf6cd" event={"ID":"9d23c44c-6fd4-463b-8ed1-2f138fd0122c","Type":"ContainerDied","Data":"bdb52ea2a5b513602e9aaa5196653db8856aaf888a6858a935f130e6fd7358fa"} Nov 24 12:30:46 crc kubenswrapper[4930]: I1124 12:30:46.559774 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-k6w82" event={"ID":"4d609896-3f79-4a48-801a-1d8919f1066d","Type":"ContainerStarted","Data":"238af833659806a3d08829689cca0b94e01376ca721cbe62cbe3489e66d9fa55"} Nov 24 12:30:46 crc kubenswrapper[4930]: I1124 12:30:46.559813 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6w82" event={"ID":"4d609896-3f79-4a48-801a-1d8919f1066d","Type":"ContainerStarted","Data":"c89482447542487eff59c334aea7d64441193cd51ecf0720db5d0911f72e3319"} Nov 24 12:30:47 crc kubenswrapper[4930]: I1124 12:30:47.572391 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf6cd" event={"ID":"9d23c44c-6fd4-463b-8ed1-2f138fd0122c","Type":"ContainerStarted","Data":"4f44673e1727d4d66c43868d002b90edf4520737c9c398632cfd46504efba80a"} Nov 24 12:30:47 crc kubenswrapper[4930]: I1124 12:30:47.576045 4930 generic.go:334] "Generic (PLEG): container finished" podID="4d609896-3f79-4a48-801a-1d8919f1066d" containerID="238af833659806a3d08829689cca0b94e01376ca721cbe62cbe3489e66d9fa55" exitCode=0 Nov 24 12:30:47 crc kubenswrapper[4930]: I1124 12:30:47.576115 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6w82" event={"ID":"4d609896-3f79-4a48-801a-1d8919f1066d","Type":"ContainerDied","Data":"238af833659806a3d08829689cca0b94e01376ca721cbe62cbe3489e66d9fa55"} Nov 24 12:30:47 crc kubenswrapper[4930]: I1124 12:30:47.579818 4930 generic.go:334] "Generic (PLEG): container finished" podID="0898c92f-ad7b-49b4-9111-2abe65697122" containerID="2673e23b59c34902c1e104f46731ebd11d2120911f63739e35a3b3201cc34764" exitCode=0 Nov 24 12:30:47 crc kubenswrapper[4930]: I1124 12:30:47.579911 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxkkj" 
event={"ID":"0898c92f-ad7b-49b4-9111-2abe65697122","Type":"ContainerDied","Data":"2673e23b59c34902c1e104f46731ebd11d2120911f63739e35a3b3201cc34764"} Nov 24 12:30:47 crc kubenswrapper[4930]: I1124 12:30:47.584234 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n2bqw" event={"ID":"4a0ee175-51ac-4313-bef6-278021bf7077","Type":"ContainerStarted","Data":"2a90d8a004bb7a27f1e032bdef65bab6e7d55559be8d86c6b1e8d447183c8760"} Nov 24 12:30:47 crc kubenswrapper[4930]: I1124 12:30:47.600477 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mf6cd" podStartSLOduration=2.139004176 podStartE2EDuration="5.600456991s" podCreationTimestamp="2025-11-24 12:30:42 +0000 UTC" firstStartedPulling="2025-11-24 12:30:43.526419851 +0000 UTC m=+1890.140747801" lastFinishedPulling="2025-11-24 12:30:46.987872666 +0000 UTC m=+1893.602200616" observedRunningTime="2025-11-24 12:30:47.594065966 +0000 UTC m=+1894.208393926" watchObservedRunningTime="2025-11-24 12:30:47.600456991 +0000 UTC m=+1894.214784951" Nov 24 12:30:47 crc kubenswrapper[4930]: I1124 12:30:47.614595 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n2bqw" podStartSLOduration=3.178446721 podStartE2EDuration="5.614578849s" podCreationTimestamp="2025-11-24 12:30:42 +0000 UTC" firstStartedPulling="2025-11-24 12:30:44.538714562 +0000 UTC m=+1891.153042512" lastFinishedPulling="2025-11-24 12:30:46.97484669 +0000 UTC m=+1893.589174640" observedRunningTime="2025-11-24 12:30:47.608986697 +0000 UTC m=+1894.223314647" watchObservedRunningTime="2025-11-24 12:30:47.614578849 +0000 UTC m=+1894.228906789" Nov 24 12:30:48 crc kubenswrapper[4930]: I1124 12:30:48.601089 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxkkj" 
event={"ID":"0898c92f-ad7b-49b4-9111-2abe65697122","Type":"ContainerStarted","Data":"a105aab136e32d563f30433e1203494bfd440f6656042880c4dc00097c8c247b"} Nov 24 12:30:48 crc kubenswrapper[4930]: I1124 12:30:48.604945 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6w82" event={"ID":"4d609896-3f79-4a48-801a-1d8919f1066d","Type":"ContainerStarted","Data":"b0a831eb163935de5ad203b55decf91ef6527840d6fcc8ec91817d6f7a24ef18"} Nov 24 12:30:49 crc kubenswrapper[4930]: I1124 12:30:49.616146 4930 generic.go:334] "Generic (PLEG): container finished" podID="0898c92f-ad7b-49b4-9111-2abe65697122" containerID="a105aab136e32d563f30433e1203494bfd440f6656042880c4dc00097c8c247b" exitCode=0 Nov 24 12:30:49 crc kubenswrapper[4930]: I1124 12:30:49.616197 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxkkj" event={"ID":"0898c92f-ad7b-49b4-9111-2abe65697122","Type":"ContainerDied","Data":"a105aab136e32d563f30433e1203494bfd440f6656042880c4dc00097c8c247b"} Nov 24 12:30:49 crc kubenswrapper[4930]: I1124 12:30:49.620741 4930 generic.go:334] "Generic (PLEG): container finished" podID="4d609896-3f79-4a48-801a-1d8919f1066d" containerID="b0a831eb163935de5ad203b55decf91ef6527840d6fcc8ec91817d6f7a24ef18" exitCode=0 Nov 24 12:30:49 crc kubenswrapper[4930]: I1124 12:30:49.620778 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6w82" event={"ID":"4d609896-3f79-4a48-801a-1d8919f1066d","Type":"ContainerDied","Data":"b0a831eb163935de5ad203b55decf91ef6527840d6fcc8ec91817d6f7a24ef18"} Nov 24 12:30:49 crc kubenswrapper[4930]: I1124 12:30:49.620802 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6w82" event={"ID":"4d609896-3f79-4a48-801a-1d8919f1066d","Type":"ContainerStarted","Data":"eb93bf50ff431e85d4da3b30bb051d587cd5344553a57683d13f87e940cee5ad"} Nov 24 12:30:49 crc kubenswrapper[4930]: I1124 
12:30:49.654515 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k6w82" podStartSLOduration=4.157390819 podStartE2EDuration="5.654490852s" podCreationTimestamp="2025-11-24 12:30:44 +0000 UTC" firstStartedPulling="2025-11-24 12:30:47.577362484 +0000 UTC m=+1894.191690434" lastFinishedPulling="2025-11-24 12:30:49.074462527 +0000 UTC m=+1895.688790467" observedRunningTime="2025-11-24 12:30:49.65027508 +0000 UTC m=+1896.264603030" watchObservedRunningTime="2025-11-24 12:30:49.654490852 +0000 UTC m=+1896.268818832" Nov 24 12:30:50 crc kubenswrapper[4930]: I1124 12:30:50.632608 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxkkj" event={"ID":"0898c92f-ad7b-49b4-9111-2abe65697122","Type":"ContainerStarted","Data":"9a77669bfd25dd8af658f1bd489874c14238b8a473d90e2b7602efc1544d12db"} Nov 24 12:30:50 crc kubenswrapper[4930]: I1124 12:30:50.665244 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rxkkj" podStartSLOduration=4.252409353 podStartE2EDuration="6.665217417s" podCreationTimestamp="2025-11-24 12:30:44 +0000 UTC" firstStartedPulling="2025-11-24 12:30:47.582305017 +0000 UTC m=+1894.196632977" lastFinishedPulling="2025-11-24 12:30:49.995113091 +0000 UTC m=+1896.609441041" observedRunningTime="2025-11-24 12:30:50.655532647 +0000 UTC m=+1897.269860617" watchObservedRunningTime="2025-11-24 12:30:50.665217417 +0000 UTC m=+1897.279545367" Nov 24 12:30:52 crc kubenswrapper[4930]: I1124 12:30:52.714618 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:30:52 crc kubenswrapper[4930]: I1124 12:30:52.715595 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:30:52 crc kubenswrapper[4930]: I1124 12:30:52.917049 4930 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:52 crc kubenswrapper[4930]: I1124 12:30:52.917094 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:52 crc kubenswrapper[4930]: I1124 12:30:52.961507 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:53 crc kubenswrapper[4930]: I1124 12:30:53.084395 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:30:53 crc kubenswrapper[4930]: E1124 12:30:53.084665 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:30:53 crc kubenswrapper[4930]: I1124 12:30:53.700604 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:53 crc kubenswrapper[4930]: I1124 12:30:53.760714 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mf6cd" podUID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerName="registry-server" probeResult="failure" output=< Nov 24 12:30:53 crc kubenswrapper[4930]: timeout: failed to connect service ":50051" within 1s Nov 24 12:30:53 crc kubenswrapper[4930]: > Nov 24 12:30:54 crc kubenswrapper[4930]: I1124 12:30:54.560388 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n2bqw"] Nov 24 12:30:55 crc kubenswrapper[4930]: 
I1124 12:30:55.106674 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:55 crc kubenswrapper[4930]: I1124 12:30:55.107009 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:55 crc kubenswrapper[4930]: I1124 12:30:55.152963 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:55 crc kubenswrapper[4930]: I1124 12:30:55.317736 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:55 crc kubenswrapper[4930]: I1124 12:30:55.317793 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:55 crc kubenswrapper[4930]: I1124 12:30:55.365698 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:55 crc kubenswrapper[4930]: I1124 12:30:55.673961 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n2bqw" podUID="4a0ee175-51ac-4313-bef6-278021bf7077" containerName="registry-server" containerID="cri-o://2a90d8a004bb7a27f1e032bdef65bab6e7d55559be8d86c6b1e8d447183c8760" gracePeriod=2 Nov 24 12:30:55 crc kubenswrapper[4930]: I1124 12:30:55.730978 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:55 crc kubenswrapper[4930]: I1124 12:30:55.731743 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.167091 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.316030 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0ee175-51ac-4313-bef6-278021bf7077-catalog-content\") pod \"4a0ee175-51ac-4313-bef6-278021bf7077\" (UID: \"4a0ee175-51ac-4313-bef6-278021bf7077\") " Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.316402 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0ee175-51ac-4313-bef6-278021bf7077-utilities\") pod \"4a0ee175-51ac-4313-bef6-278021bf7077\" (UID: \"4a0ee175-51ac-4313-bef6-278021bf7077\") " Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.316566 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw7sb\" (UniqueName: \"kubernetes.io/projected/4a0ee175-51ac-4313-bef6-278021bf7077-kube-api-access-hw7sb\") pod \"4a0ee175-51ac-4313-bef6-278021bf7077\" (UID: \"4a0ee175-51ac-4313-bef6-278021bf7077\") " Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.317074 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a0ee175-51ac-4313-bef6-278021bf7077-utilities" (OuterVolumeSpecName: "utilities") pod "4a0ee175-51ac-4313-bef6-278021bf7077" (UID: "4a0ee175-51ac-4313-bef6-278021bf7077"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.325732 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a0ee175-51ac-4313-bef6-278021bf7077-kube-api-access-hw7sb" (OuterVolumeSpecName: "kube-api-access-hw7sb") pod "4a0ee175-51ac-4313-bef6-278021bf7077" (UID: "4a0ee175-51ac-4313-bef6-278021bf7077"). InnerVolumeSpecName "kube-api-access-hw7sb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.388708 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a0ee175-51ac-4313-bef6-278021bf7077-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a0ee175-51ac-4313-bef6-278021bf7077" (UID: "4a0ee175-51ac-4313-bef6-278021bf7077"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.419463 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0ee175-51ac-4313-bef6-278021bf7077-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.419504 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0ee175-51ac-4313-bef6-278021bf7077-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.419517 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw7sb\" (UniqueName: \"kubernetes.io/projected/4a0ee175-51ac-4313-bef6-278021bf7077-kube-api-access-hw7sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.685796 4930 generic.go:334] "Generic (PLEG): container finished" podID="4a0ee175-51ac-4313-bef6-278021bf7077" containerID="2a90d8a004bb7a27f1e032bdef65bab6e7d55559be8d86c6b1e8d447183c8760" exitCode=0 Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.685864 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n2bqw" event={"ID":"4a0ee175-51ac-4313-bef6-278021bf7077","Type":"ContainerDied","Data":"2a90d8a004bb7a27f1e032bdef65bab6e7d55559be8d86c6b1e8d447183c8760"} Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.685905 4930 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-n2bqw" event={"ID":"4a0ee175-51ac-4313-bef6-278021bf7077","Type":"ContainerDied","Data":"18981ba9ec48e3491cc1673b5326c7f92ff5a6125d36115af00f62eb368f226e"} Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.685929 4930 scope.go:117] "RemoveContainer" containerID="2a90d8a004bb7a27f1e032bdef65bab6e7d55559be8d86c6b1e8d447183c8760" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.686866 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n2bqw" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.714706 4930 scope.go:117] "RemoveContainer" containerID="ffc4e53ed7a0d88666bbca43928155415345f477234284085e813692640a474b" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.726943 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n2bqw"] Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.739893 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n2bqw"] Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.775835 4930 scope.go:117] "RemoveContainer" containerID="423682c0402afaec998cc4966ad73a4e1b26642809fe49556ddf0c960f8af88d" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.814629 4930 scope.go:117] "RemoveContainer" containerID="2a90d8a004bb7a27f1e032bdef65bab6e7d55559be8d86c6b1e8d447183c8760" Nov 24 12:30:56 crc kubenswrapper[4930]: E1124 12:30:56.815252 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a90d8a004bb7a27f1e032bdef65bab6e7d55559be8d86c6b1e8d447183c8760\": container with ID starting with 2a90d8a004bb7a27f1e032bdef65bab6e7d55559be8d86c6b1e8d447183c8760 not found: ID does not exist" containerID="2a90d8a004bb7a27f1e032bdef65bab6e7d55559be8d86c6b1e8d447183c8760" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 
12:30:56.815346 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a90d8a004bb7a27f1e032bdef65bab6e7d55559be8d86c6b1e8d447183c8760"} err="failed to get container status \"2a90d8a004bb7a27f1e032bdef65bab6e7d55559be8d86c6b1e8d447183c8760\": rpc error: code = NotFound desc = could not find container \"2a90d8a004bb7a27f1e032bdef65bab6e7d55559be8d86c6b1e8d447183c8760\": container with ID starting with 2a90d8a004bb7a27f1e032bdef65bab6e7d55559be8d86c6b1e8d447183c8760 not found: ID does not exist" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.815403 4930 scope.go:117] "RemoveContainer" containerID="ffc4e53ed7a0d88666bbca43928155415345f477234284085e813692640a474b" Nov 24 12:30:56 crc kubenswrapper[4930]: E1124 12:30:56.816135 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc4e53ed7a0d88666bbca43928155415345f477234284085e813692640a474b\": container with ID starting with ffc4e53ed7a0d88666bbca43928155415345f477234284085e813692640a474b not found: ID does not exist" containerID="ffc4e53ed7a0d88666bbca43928155415345f477234284085e813692640a474b" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.816187 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc4e53ed7a0d88666bbca43928155415345f477234284085e813692640a474b"} err="failed to get container status \"ffc4e53ed7a0d88666bbca43928155415345f477234284085e813692640a474b\": rpc error: code = NotFound desc = could not find container \"ffc4e53ed7a0d88666bbca43928155415345f477234284085e813692640a474b\": container with ID starting with ffc4e53ed7a0d88666bbca43928155415345f477234284085e813692640a474b not found: ID does not exist" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.816231 4930 scope.go:117] "RemoveContainer" containerID="423682c0402afaec998cc4966ad73a4e1b26642809fe49556ddf0c960f8af88d" Nov 24 12:30:56 crc 
kubenswrapper[4930]: E1124 12:30:56.816682 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"423682c0402afaec998cc4966ad73a4e1b26642809fe49556ddf0c960f8af88d\": container with ID starting with 423682c0402afaec998cc4966ad73a4e1b26642809fe49556ddf0c960f8af88d not found: ID does not exist" containerID="423682c0402afaec998cc4966ad73a4e1b26642809fe49556ddf0c960f8af88d" Nov 24 12:30:56 crc kubenswrapper[4930]: I1124 12:30:56.816710 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"423682c0402afaec998cc4966ad73a4e1b26642809fe49556ddf0c960f8af88d"} err="failed to get container status \"423682c0402afaec998cc4966ad73a4e1b26642809fe49556ddf0c960f8af88d\": rpc error: code = NotFound desc = could not find container \"423682c0402afaec998cc4966ad73a4e1b26642809fe49556ddf0c960f8af88d\": container with ID starting with 423682c0402afaec998cc4966ad73a4e1b26642809fe49556ddf0c960f8af88d not found: ID does not exist" Nov 24 12:30:57 crc kubenswrapper[4930]: I1124 12:30:57.556769 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k6w82"] Nov 24 12:30:57 crc kubenswrapper[4930]: I1124 12:30:57.695617 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k6w82" podUID="4d609896-3f79-4a48-801a-1d8919f1066d" containerName="registry-server" containerID="cri-o://eb93bf50ff431e85d4da3b30bb051d587cd5344553a57683d13f87e940cee5ad" gracePeriod=2 Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.095570 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a0ee175-51ac-4313-bef6-278021bf7077" path="/var/lib/kubelet/pods/4a0ee175-51ac-4313-bef6-278021bf7077/volumes" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.174365 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.255995 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6jm8\" (UniqueName: \"kubernetes.io/projected/4d609896-3f79-4a48-801a-1d8919f1066d-kube-api-access-v6jm8\") pod \"4d609896-3f79-4a48-801a-1d8919f1066d\" (UID: \"4d609896-3f79-4a48-801a-1d8919f1066d\") " Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.256211 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d609896-3f79-4a48-801a-1d8919f1066d-catalog-content\") pod \"4d609896-3f79-4a48-801a-1d8919f1066d\" (UID: \"4d609896-3f79-4a48-801a-1d8919f1066d\") " Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.256285 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d609896-3f79-4a48-801a-1d8919f1066d-utilities\") pod \"4d609896-3f79-4a48-801a-1d8919f1066d\" (UID: \"4d609896-3f79-4a48-801a-1d8919f1066d\") " Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.257469 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d609896-3f79-4a48-801a-1d8919f1066d-utilities" (OuterVolumeSpecName: "utilities") pod "4d609896-3f79-4a48-801a-1d8919f1066d" (UID: "4d609896-3f79-4a48-801a-1d8919f1066d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.261834 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d609896-3f79-4a48-801a-1d8919f1066d-kube-api-access-v6jm8" (OuterVolumeSpecName: "kube-api-access-v6jm8") pod "4d609896-3f79-4a48-801a-1d8919f1066d" (UID: "4d609896-3f79-4a48-801a-1d8919f1066d"). InnerVolumeSpecName "kube-api-access-v6jm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.302605 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d609896-3f79-4a48-801a-1d8919f1066d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d609896-3f79-4a48-801a-1d8919f1066d" (UID: "4d609896-3f79-4a48-801a-1d8919f1066d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.358845 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d609896-3f79-4a48-801a-1d8919f1066d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.358879 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6jm8\" (UniqueName: \"kubernetes.io/projected/4d609896-3f79-4a48-801a-1d8919f1066d-kube-api-access-v6jm8\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.358902 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d609896-3f79-4a48-801a-1d8919f1066d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.576286 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxkkj"] Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.576793 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rxkkj" podUID="0898c92f-ad7b-49b4-9111-2abe65697122" containerName="registry-server" containerID="cri-o://9a77669bfd25dd8af658f1bd489874c14238b8a473d90e2b7602efc1544d12db" gracePeriod=2 Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.707182 4930 generic.go:334] "Generic (PLEG): container finished" 
podID="4d609896-3f79-4a48-801a-1d8919f1066d" containerID="eb93bf50ff431e85d4da3b30bb051d587cd5344553a57683d13f87e940cee5ad" exitCode=0 Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.707231 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k6w82" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.707709 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6w82" event={"ID":"4d609896-3f79-4a48-801a-1d8919f1066d","Type":"ContainerDied","Data":"eb93bf50ff431e85d4da3b30bb051d587cd5344553a57683d13f87e940cee5ad"} Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.707901 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6w82" event={"ID":"4d609896-3f79-4a48-801a-1d8919f1066d","Type":"ContainerDied","Data":"c89482447542487eff59c334aea7d64441193cd51ecf0720db5d0911f72e3319"} Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.707928 4930 scope.go:117] "RemoveContainer" containerID="eb93bf50ff431e85d4da3b30bb051d587cd5344553a57683d13f87e940cee5ad" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.710513 4930 generic.go:334] "Generic (PLEG): container finished" podID="0898c92f-ad7b-49b4-9111-2abe65697122" containerID="9a77669bfd25dd8af658f1bd489874c14238b8a473d90e2b7602efc1544d12db" exitCode=0 Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.710636 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxkkj" event={"ID":"0898c92f-ad7b-49b4-9111-2abe65697122","Type":"ContainerDied","Data":"9a77669bfd25dd8af658f1bd489874c14238b8a473d90e2b7602efc1544d12db"} Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.755483 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k6w82"] Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.762016 4930 scope.go:117] 
"RemoveContainer" containerID="b0a831eb163935de5ad203b55decf91ef6527840d6fcc8ec91817d6f7a24ef18" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.764092 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k6w82"] Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.780727 4930 scope.go:117] "RemoveContainer" containerID="238af833659806a3d08829689cca0b94e01376ca721cbe62cbe3489e66d9fa55" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.808913 4930 scope.go:117] "RemoveContainer" containerID="eb93bf50ff431e85d4da3b30bb051d587cd5344553a57683d13f87e940cee5ad" Nov 24 12:30:58 crc kubenswrapper[4930]: E1124 12:30:58.809310 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb93bf50ff431e85d4da3b30bb051d587cd5344553a57683d13f87e940cee5ad\": container with ID starting with eb93bf50ff431e85d4da3b30bb051d587cd5344553a57683d13f87e940cee5ad not found: ID does not exist" containerID="eb93bf50ff431e85d4da3b30bb051d587cd5344553a57683d13f87e940cee5ad" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.809335 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb93bf50ff431e85d4da3b30bb051d587cd5344553a57683d13f87e940cee5ad"} err="failed to get container status \"eb93bf50ff431e85d4da3b30bb051d587cd5344553a57683d13f87e940cee5ad\": rpc error: code = NotFound desc = could not find container \"eb93bf50ff431e85d4da3b30bb051d587cd5344553a57683d13f87e940cee5ad\": container with ID starting with eb93bf50ff431e85d4da3b30bb051d587cd5344553a57683d13f87e940cee5ad not found: ID does not exist" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.809353 4930 scope.go:117] "RemoveContainer" containerID="b0a831eb163935de5ad203b55decf91ef6527840d6fcc8ec91817d6f7a24ef18" Nov 24 12:30:58 crc kubenswrapper[4930]: E1124 12:30:58.809582 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"b0a831eb163935de5ad203b55decf91ef6527840d6fcc8ec91817d6f7a24ef18\": container with ID starting with b0a831eb163935de5ad203b55decf91ef6527840d6fcc8ec91817d6f7a24ef18 not found: ID does not exist" containerID="b0a831eb163935de5ad203b55decf91ef6527840d6fcc8ec91817d6f7a24ef18" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.809596 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a831eb163935de5ad203b55decf91ef6527840d6fcc8ec91817d6f7a24ef18"} err="failed to get container status \"b0a831eb163935de5ad203b55decf91ef6527840d6fcc8ec91817d6f7a24ef18\": rpc error: code = NotFound desc = could not find container \"b0a831eb163935de5ad203b55decf91ef6527840d6fcc8ec91817d6f7a24ef18\": container with ID starting with b0a831eb163935de5ad203b55decf91ef6527840d6fcc8ec91817d6f7a24ef18 not found: ID does not exist" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.809608 4930 scope.go:117] "RemoveContainer" containerID="238af833659806a3d08829689cca0b94e01376ca721cbe62cbe3489e66d9fa55" Nov 24 12:30:58 crc kubenswrapper[4930]: E1124 12:30:58.809813 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"238af833659806a3d08829689cca0b94e01376ca721cbe62cbe3489e66d9fa55\": container with ID starting with 238af833659806a3d08829689cca0b94e01376ca721cbe62cbe3489e66d9fa55 not found: ID does not exist" containerID="238af833659806a3d08829689cca0b94e01376ca721cbe62cbe3489e66d9fa55" Nov 24 12:30:58 crc kubenswrapper[4930]: I1124 12:30:58.809827 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"238af833659806a3d08829689cca0b94e01376ca721cbe62cbe3489e66d9fa55"} err="failed to get container status \"238af833659806a3d08829689cca0b94e01376ca721cbe62cbe3489e66d9fa55\": rpc error: code = NotFound desc = could not find container 
\"238af833659806a3d08829689cca0b94e01376ca721cbe62cbe3489e66d9fa55\": container with ID starting with 238af833659806a3d08829689cca0b94e01376ca721cbe62cbe3489e66d9fa55 not found: ID does not exist" Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.011787 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.187962 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njgw9\" (UniqueName: \"kubernetes.io/projected/0898c92f-ad7b-49b4-9111-2abe65697122-kube-api-access-njgw9\") pod \"0898c92f-ad7b-49b4-9111-2abe65697122\" (UID: \"0898c92f-ad7b-49b4-9111-2abe65697122\") " Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.188011 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0898c92f-ad7b-49b4-9111-2abe65697122-utilities\") pod \"0898c92f-ad7b-49b4-9111-2abe65697122\" (UID: \"0898c92f-ad7b-49b4-9111-2abe65697122\") " Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.188037 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0898c92f-ad7b-49b4-9111-2abe65697122-catalog-content\") pod \"0898c92f-ad7b-49b4-9111-2abe65697122\" (UID: \"0898c92f-ad7b-49b4-9111-2abe65697122\") " Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.189099 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0898c92f-ad7b-49b4-9111-2abe65697122-utilities" (OuterVolumeSpecName: "utilities") pod "0898c92f-ad7b-49b4-9111-2abe65697122" (UID: "0898c92f-ad7b-49b4-9111-2abe65697122"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.193747 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0898c92f-ad7b-49b4-9111-2abe65697122-kube-api-access-njgw9" (OuterVolumeSpecName: "kube-api-access-njgw9") pod "0898c92f-ad7b-49b4-9111-2abe65697122" (UID: "0898c92f-ad7b-49b4-9111-2abe65697122"). InnerVolumeSpecName "kube-api-access-njgw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.206492 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0898c92f-ad7b-49b4-9111-2abe65697122-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0898c92f-ad7b-49b4-9111-2abe65697122" (UID: "0898c92f-ad7b-49b4-9111-2abe65697122"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.290496 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njgw9\" (UniqueName: \"kubernetes.io/projected/0898c92f-ad7b-49b4-9111-2abe65697122-kube-api-access-njgw9\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.290565 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0898c92f-ad7b-49b4-9111-2abe65697122-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.290582 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0898c92f-ad7b-49b4-9111-2abe65697122-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.722728 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxkkj" 
event={"ID":"0898c92f-ad7b-49b4-9111-2abe65697122","Type":"ContainerDied","Data":"5566cb5c2049e20bdbc56c5c66318458858a8bf2d51676580daf9eda1e2399d5"} Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.722774 4930 scope.go:117] "RemoveContainer" containerID="9a77669bfd25dd8af658f1bd489874c14238b8a473d90e2b7602efc1544d12db" Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.722776 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxkkj" Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.743387 4930 scope.go:117] "RemoveContainer" containerID="a105aab136e32d563f30433e1203494bfd440f6656042880c4dc00097c8c247b" Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.760900 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxkkj"] Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.769311 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxkkj"] Nov 24 12:30:59 crc kubenswrapper[4930]: I1124 12:30:59.792378 4930 scope.go:117] "RemoveContainer" containerID="2673e23b59c34902c1e104f46731ebd11d2120911f63739e35a3b3201cc34764" Nov 24 12:31:00 crc kubenswrapper[4930]: I1124 12:31:00.097831 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0898c92f-ad7b-49b4-9111-2abe65697122" path="/var/lib/kubelet/pods/0898c92f-ad7b-49b4-9111-2abe65697122/volumes" Nov 24 12:31:00 crc kubenswrapper[4930]: I1124 12:31:00.098653 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d609896-3f79-4a48-801a-1d8919f1066d" path="/var/lib/kubelet/pods/4d609896-3f79-4a48-801a-1d8919f1066d/volumes" Nov 24 12:31:03 crc kubenswrapper[4930]: I1124 12:31:03.761765 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mf6cd" podUID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerName="registry-server" probeResult="failure" 
output=< Nov 24 12:31:03 crc kubenswrapper[4930]: timeout: failed to connect service ":50051" within 1s Nov 24 12:31:03 crc kubenswrapper[4930]: > Nov 24 12:31:06 crc kubenswrapper[4930]: I1124 12:31:06.084966 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:31:06 crc kubenswrapper[4930]: E1124 12:31:06.085583 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:31:13 crc kubenswrapper[4930]: I1124 12:31:13.758717 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mf6cd" podUID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerName="registry-server" probeResult="failure" output=< Nov 24 12:31:13 crc kubenswrapper[4930]: timeout: failed to connect service ":50051" within 1s Nov 24 12:31:13 crc kubenswrapper[4930]: > Nov 24 12:31:21 crc kubenswrapper[4930]: I1124 12:31:21.085332 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:31:21 crc kubenswrapper[4930]: E1124 12:31:21.086114 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:31:23 crc kubenswrapper[4930]: I1124 12:31:23.755626 4930 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mf6cd" podUID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerName="registry-server" probeResult="failure" output=< Nov 24 12:31:23 crc kubenswrapper[4930]: timeout: failed to connect service ":50051" within 1s Nov 24 12:31:23 crc kubenswrapper[4930]: > Nov 24 12:31:29 crc kubenswrapper[4930]: I1124 12:31:29.997162 4930 generic.go:334] "Generic (PLEG): container finished" podID="48d052f4-e44f-45e2-856a-08346f84f5b8" containerID="145a4b0fce448e23af73b890a6c6c81b1a8e308db8e6f35dce6dd86ef56d034e" exitCode=0 Nov 24 12:31:29 crc kubenswrapper[4930]: I1124 12:31:29.997281 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" event={"ID":"48d052f4-e44f-45e2-856a-08346f84f5b8","Type":"ContainerDied","Data":"145a4b0fce448e23af73b890a6c6c81b1a8e308db8e6f35dce6dd86ef56d034e"} Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.406623 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.407891 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh8z8\" (UniqueName: \"kubernetes.io/projected/48d052f4-e44f-45e2-856a-08346f84f5b8-kube-api-access-jh8z8\") pod \"48d052f4-e44f-45e2-856a-08346f84f5b8\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.414262 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48d052f4-e44f-45e2-856a-08346f84f5b8-kube-api-access-jh8z8" (OuterVolumeSpecName: "kube-api-access-jh8z8") pod "48d052f4-e44f-45e2-856a-08346f84f5b8" (UID: "48d052f4-e44f-45e2-856a-08346f84f5b8"). InnerVolumeSpecName "kube-api-access-jh8z8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.509488 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/48d052f4-e44f-45e2-856a-08346f84f5b8-ovncontroller-config-0\") pod \"48d052f4-e44f-45e2-856a-08346f84f5b8\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.509549 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-ssh-key\") pod \"48d052f4-e44f-45e2-856a-08346f84f5b8\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.509582 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-inventory\") pod \"48d052f4-e44f-45e2-856a-08346f84f5b8\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.509671 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-ovn-combined-ca-bundle\") pod \"48d052f4-e44f-45e2-856a-08346f84f5b8\" (UID: \"48d052f4-e44f-45e2-856a-08346f84f5b8\") " Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.511172 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh8z8\" (UniqueName: \"kubernetes.io/projected/48d052f4-e44f-45e2-856a-08346f84f5b8-kube-api-access-jh8z8\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.514837 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-ovn-combined-ca-bundle" (OuterVolumeSpecName: 
"ovn-combined-ca-bundle") pod "48d052f4-e44f-45e2-856a-08346f84f5b8" (UID: "48d052f4-e44f-45e2-856a-08346f84f5b8"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.535257 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48d052f4-e44f-45e2-856a-08346f84f5b8-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "48d052f4-e44f-45e2-856a-08346f84f5b8" (UID: "48d052f4-e44f-45e2-856a-08346f84f5b8"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.536523 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "48d052f4-e44f-45e2-856a-08346f84f5b8" (UID: "48d052f4-e44f-45e2-856a-08346f84f5b8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.538417 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-inventory" (OuterVolumeSpecName: "inventory") pod "48d052f4-e44f-45e2-856a-08346f84f5b8" (UID: "48d052f4-e44f-45e2-856a-08346f84f5b8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.613836 4930 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/48d052f4-e44f-45e2-856a-08346f84f5b8-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.613879 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.613892 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:31 crc kubenswrapper[4930]: I1124 12:31:31.613904 4930 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d052f4-e44f-45e2-856a-08346f84f5b8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.015231 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" event={"ID":"48d052f4-e44f-45e2-856a-08346f84f5b8","Type":"ContainerDied","Data":"6ee38d247b952187da36f37a86e0c6c4527594d6093039f9a417d0d0b88c15b7"} Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.015272 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ee38d247b952187da36f37a86e0c6c4527594d6093039f9a417d0d0b88c15b7" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.015276 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t562k" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.219488 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc"] Nov 24 12:31:32 crc kubenswrapper[4930]: E1124 12:31:32.220177 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d609896-3f79-4a48-801a-1d8919f1066d" containerName="extract-content" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220195 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d609896-3f79-4a48-801a-1d8919f1066d" containerName="extract-content" Nov 24 12:31:32 crc kubenswrapper[4930]: E1124 12:31:32.220203 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d609896-3f79-4a48-801a-1d8919f1066d" containerName="registry-server" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220209 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d609896-3f79-4a48-801a-1d8919f1066d" containerName="registry-server" Nov 24 12:31:32 crc kubenswrapper[4930]: E1124 12:31:32.220231 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0ee175-51ac-4313-bef6-278021bf7077" containerName="extract-utilities" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220240 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0ee175-51ac-4313-bef6-278021bf7077" containerName="extract-utilities" Nov 24 12:31:32 crc kubenswrapper[4930]: E1124 12:31:32.220254 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d609896-3f79-4a48-801a-1d8919f1066d" containerName="extract-utilities" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220261 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d609896-3f79-4a48-801a-1d8919f1066d" containerName="extract-utilities" Nov 24 12:31:32 crc kubenswrapper[4930]: E1124 12:31:32.220276 4930 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="4a0ee175-51ac-4313-bef6-278021bf7077" containerName="extract-content" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220283 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0ee175-51ac-4313-bef6-278021bf7077" containerName="extract-content" Nov 24 12:31:32 crc kubenswrapper[4930]: E1124 12:31:32.220294 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0898c92f-ad7b-49b4-9111-2abe65697122" containerName="registry-server" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220299 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="0898c92f-ad7b-49b4-9111-2abe65697122" containerName="registry-server" Nov 24 12:31:32 crc kubenswrapper[4930]: E1124 12:31:32.220308 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0898c92f-ad7b-49b4-9111-2abe65697122" containerName="extract-utilities" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220314 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="0898c92f-ad7b-49b4-9111-2abe65697122" containerName="extract-utilities" Nov 24 12:31:32 crc kubenswrapper[4930]: E1124 12:31:32.220333 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d052f4-e44f-45e2-856a-08346f84f5b8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220339 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d052f4-e44f-45e2-856a-08346f84f5b8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 12:31:32 crc kubenswrapper[4930]: E1124 12:31:32.220350 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0898c92f-ad7b-49b4-9111-2abe65697122" containerName="extract-content" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220355 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="0898c92f-ad7b-49b4-9111-2abe65697122" containerName="extract-content" Nov 24 12:31:32 crc kubenswrapper[4930]: E1124 12:31:32.220367 4930 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0ee175-51ac-4313-bef6-278021bf7077" containerName="registry-server" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220373 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0ee175-51ac-4313-bef6-278021bf7077" containerName="registry-server" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220561 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="0898c92f-ad7b-49b4-9111-2abe65697122" containerName="registry-server" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220583 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d052f4-e44f-45e2-856a-08346f84f5b8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220596 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a0ee175-51ac-4313-bef6-278021bf7077" containerName="registry-server" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.220607 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d609896-3f79-4a48-801a-1d8919f1066d" containerName="registry-server" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.221319 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.223274 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.223348 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.223373 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.223407 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjw9t\" (UniqueName: \"kubernetes.io/projected/2601017f-22e2-4b92-a224-ea216464d20a-kube-api-access-gjw9t\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 
12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.223435 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.223506 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.223995 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.224353 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.224497 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.225443 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.225519 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.225662 4930 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.229263 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc"] Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.326030 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.326094 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.326145 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjw9t\" (UniqueName: \"kubernetes.io/projected/2601017f-22e2-4b92-a224-ea216464d20a-kube-api-access-gjw9t\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.326182 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" 
(UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.326253 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.326358 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.329756 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.330307 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: 
I1124 12:31:32.330901 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.331874 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.340124 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.350554 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjw9t\" (UniqueName: \"kubernetes.io/projected/2601017f-22e2-4b92-a224-ea216464d20a-kube-api-access-gjw9t\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.542862 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.765167 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:31:32 crc kubenswrapper[4930]: I1124 12:31:32.811248 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:31:33 crc kubenswrapper[4930]: I1124 12:31:33.003158 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mf6cd"] Nov 24 12:31:33 crc kubenswrapper[4930]: I1124 12:31:33.045724 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc"] Nov 24 12:31:33 crc kubenswrapper[4930]: I1124 12:31:33.049331 4930 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:31:33 crc kubenswrapper[4930]: I1124 12:31:33.084751 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:31:33 crc kubenswrapper[4930]: E1124 12:31:33.085054 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.038009 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" 
event={"ID":"2601017f-22e2-4b92-a224-ea216464d20a","Type":"ContainerStarted","Data":"511a98cf25e50fe6b5bad84d450c36438634cdbc402a3efc47bd6b302f8cc8b6"} Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.038448 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" event={"ID":"2601017f-22e2-4b92-a224-ea216464d20a","Type":"ContainerStarted","Data":"faa9686a9b08925394aa2140b10ac2ec38717f958d1988c4157e4e0bf6beb372"} Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.038190 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mf6cd" podUID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerName="registry-server" containerID="cri-o://4f44673e1727d4d66c43868d002b90edf4520737c9c398632cfd46504efba80a" gracePeriod=2 Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.059428 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" podStartSLOduration=1.443244484 podStartE2EDuration="2.059411443s" podCreationTimestamp="2025-11-24 12:31:32 +0000 UTC" firstStartedPulling="2025-11-24 12:31:33.049135451 +0000 UTC m=+1939.663463401" lastFinishedPulling="2025-11-24 12:31:33.66530241 +0000 UTC m=+1940.279630360" observedRunningTime="2025-11-24 12:31:34.05548201 +0000 UTC m=+1940.669809990" watchObservedRunningTime="2025-11-24 12:31:34.059411443 +0000 UTC m=+1940.673739383" Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.473610 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.685724 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktc7j\" (UniqueName: \"kubernetes.io/projected/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-kube-api-access-ktc7j\") pod \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\" (UID: \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\") " Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.688101 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-catalog-content\") pod \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\" (UID: \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\") " Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.688376 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-utilities\") pod \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\" (UID: \"9d23c44c-6fd4-463b-8ed1-2f138fd0122c\") " Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.689190 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-utilities" (OuterVolumeSpecName: "utilities") pod "9d23c44c-6fd4-463b-8ed1-2f138fd0122c" (UID: "9d23c44c-6fd4-463b-8ed1-2f138fd0122c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.692635 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-kube-api-access-ktc7j" (OuterVolumeSpecName: "kube-api-access-ktc7j") pod "9d23c44c-6fd4-463b-8ed1-2f138fd0122c" (UID: "9d23c44c-6fd4-463b-8ed1-2f138fd0122c"). InnerVolumeSpecName "kube-api-access-ktc7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.783839 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d23c44c-6fd4-463b-8ed1-2f138fd0122c" (UID: "9d23c44c-6fd4-463b-8ed1-2f138fd0122c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.791481 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.791523 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:34 crc kubenswrapper[4930]: I1124 12:31:34.791551 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktc7j\" (UniqueName: \"kubernetes.io/projected/9d23c44c-6fd4-463b-8ed1-2f138fd0122c-kube-api-access-ktc7j\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.054477 4930 generic.go:334] "Generic (PLEG): container finished" podID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerID="4f44673e1727d4d66c43868d002b90edf4520737c9c398632cfd46504efba80a" exitCode=0 Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.054610 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mf6cd" Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.054530 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf6cd" event={"ID":"9d23c44c-6fd4-463b-8ed1-2f138fd0122c","Type":"ContainerDied","Data":"4f44673e1727d4d66c43868d002b90edf4520737c9c398632cfd46504efba80a"} Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.054828 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf6cd" event={"ID":"9d23c44c-6fd4-463b-8ed1-2f138fd0122c","Type":"ContainerDied","Data":"b2cd52af513054657a8e7820f436b979c3d170c9e149a42b890fb6903bc4492b"} Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.054860 4930 scope.go:117] "RemoveContainer" containerID="4f44673e1727d4d66c43868d002b90edf4520737c9c398632cfd46504efba80a" Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.083050 4930 scope.go:117] "RemoveContainer" containerID="bdb52ea2a5b513602e9aaa5196653db8856aaf888a6858a935f130e6fd7358fa" Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.123938 4930 scope.go:117] "RemoveContainer" containerID="c2a724ca9b74ce5cf96b139973f2153746b327d70c694fdff994132cd51d9031" Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.128252 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mf6cd"] Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.138158 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mf6cd"] Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.179223 4930 scope.go:117] "RemoveContainer" containerID="4f44673e1727d4d66c43868d002b90edf4520737c9c398632cfd46504efba80a" Nov 24 12:31:35 crc kubenswrapper[4930]: E1124 12:31:35.179594 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4f44673e1727d4d66c43868d002b90edf4520737c9c398632cfd46504efba80a\": container with ID starting with 4f44673e1727d4d66c43868d002b90edf4520737c9c398632cfd46504efba80a not found: ID does not exist" containerID="4f44673e1727d4d66c43868d002b90edf4520737c9c398632cfd46504efba80a" Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.179660 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f44673e1727d4d66c43868d002b90edf4520737c9c398632cfd46504efba80a"} err="failed to get container status \"4f44673e1727d4d66c43868d002b90edf4520737c9c398632cfd46504efba80a\": rpc error: code = NotFound desc = could not find container \"4f44673e1727d4d66c43868d002b90edf4520737c9c398632cfd46504efba80a\": container with ID starting with 4f44673e1727d4d66c43868d002b90edf4520737c9c398632cfd46504efba80a not found: ID does not exist" Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.179684 4930 scope.go:117] "RemoveContainer" containerID="bdb52ea2a5b513602e9aaa5196653db8856aaf888a6858a935f130e6fd7358fa" Nov 24 12:31:35 crc kubenswrapper[4930]: E1124 12:31:35.180225 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdb52ea2a5b513602e9aaa5196653db8856aaf888a6858a935f130e6fd7358fa\": container with ID starting with bdb52ea2a5b513602e9aaa5196653db8856aaf888a6858a935f130e6fd7358fa not found: ID does not exist" containerID="bdb52ea2a5b513602e9aaa5196653db8856aaf888a6858a935f130e6fd7358fa" Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.180279 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdb52ea2a5b513602e9aaa5196653db8856aaf888a6858a935f130e6fd7358fa"} err="failed to get container status \"bdb52ea2a5b513602e9aaa5196653db8856aaf888a6858a935f130e6fd7358fa\": rpc error: code = NotFound desc = could not find container \"bdb52ea2a5b513602e9aaa5196653db8856aaf888a6858a935f130e6fd7358fa\": container with ID 
starting with bdb52ea2a5b513602e9aaa5196653db8856aaf888a6858a935f130e6fd7358fa not found: ID does not exist" Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.180310 4930 scope.go:117] "RemoveContainer" containerID="c2a724ca9b74ce5cf96b139973f2153746b327d70c694fdff994132cd51d9031" Nov 24 12:31:35 crc kubenswrapper[4930]: E1124 12:31:35.180771 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a724ca9b74ce5cf96b139973f2153746b327d70c694fdff994132cd51d9031\": container with ID starting with c2a724ca9b74ce5cf96b139973f2153746b327d70c694fdff994132cd51d9031 not found: ID does not exist" containerID="c2a724ca9b74ce5cf96b139973f2153746b327d70c694fdff994132cd51d9031" Nov 24 12:31:35 crc kubenswrapper[4930]: I1124 12:31:35.180798 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a724ca9b74ce5cf96b139973f2153746b327d70c694fdff994132cd51d9031"} err="failed to get container status \"c2a724ca9b74ce5cf96b139973f2153746b327d70c694fdff994132cd51d9031\": rpc error: code = NotFound desc = could not find container \"c2a724ca9b74ce5cf96b139973f2153746b327d70c694fdff994132cd51d9031\": container with ID starting with c2a724ca9b74ce5cf96b139973f2153746b327d70c694fdff994132cd51d9031 not found: ID does not exist" Nov 24 12:31:36 crc kubenswrapper[4930]: I1124 12:31:36.095597 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" path="/var/lib/kubelet/pods/9d23c44c-6fd4-463b-8ed1-2f138fd0122c/volumes" Nov 24 12:31:44 crc kubenswrapper[4930]: I1124 12:31:44.098107 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:31:44 crc kubenswrapper[4930]: E1124 12:31:44.099314 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:31:55 crc kubenswrapper[4930]: I1124 12:31:55.084411 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:31:55 crc kubenswrapper[4930]: E1124 12:31:55.085375 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:32:07 crc kubenswrapper[4930]: I1124 12:32:07.085431 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:32:07 crc kubenswrapper[4930]: E1124 12:32:07.086726 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:32:19 crc kubenswrapper[4930]: I1124 12:32:19.472486 4930 generic.go:334] "Generic (PLEG): container finished" podID="2601017f-22e2-4b92-a224-ea216464d20a" containerID="511a98cf25e50fe6b5bad84d450c36438634cdbc402a3efc47bd6b302f8cc8b6" exitCode=0 Nov 24 12:32:19 crc kubenswrapper[4930]: I1124 12:32:19.472595 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" event={"ID":"2601017f-22e2-4b92-a224-ea216464d20a","Type":"ContainerDied","Data":"511a98cf25e50fe6b5bad84d450c36438634cdbc402a3efc47bd6b302f8cc8b6"} Nov 24 12:32:20 crc kubenswrapper[4930]: I1124 12:32:20.899659 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.059069 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjw9t\" (UniqueName: \"kubernetes.io/projected/2601017f-22e2-4b92-a224-ea216464d20a-kube-api-access-gjw9t\") pod \"2601017f-22e2-4b92-a224-ea216464d20a\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.059150 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-nova-metadata-neutron-config-0\") pod \"2601017f-22e2-4b92-a224-ea216464d20a\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.059220 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-inventory\") pod \"2601017f-22e2-4b92-a224-ea216464d20a\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.059879 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-neutron-metadata-combined-ca-bundle\") pod \"2601017f-22e2-4b92-a224-ea216464d20a\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.059988 4930 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"2601017f-22e2-4b92-a224-ea216464d20a\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.060043 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-ssh-key\") pod \"2601017f-22e2-4b92-a224-ea216464d20a\" (UID: \"2601017f-22e2-4b92-a224-ea216464d20a\") " Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.065112 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2601017f-22e2-4b92-a224-ea216464d20a-kube-api-access-gjw9t" (OuterVolumeSpecName: "kube-api-access-gjw9t") pod "2601017f-22e2-4b92-a224-ea216464d20a" (UID: "2601017f-22e2-4b92-a224-ea216464d20a"). InnerVolumeSpecName "kube-api-access-gjw9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.065652 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2601017f-22e2-4b92-a224-ea216464d20a" (UID: "2601017f-22e2-4b92-a224-ea216464d20a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.091515 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2601017f-22e2-4b92-a224-ea216464d20a" (UID: "2601017f-22e2-4b92-a224-ea216464d20a"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.091610 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "2601017f-22e2-4b92-a224-ea216464d20a" (UID: "2601017f-22e2-4b92-a224-ea216464d20a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.091683 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-inventory" (OuterVolumeSpecName: "inventory") pod "2601017f-22e2-4b92-a224-ea216464d20a" (UID: "2601017f-22e2-4b92-a224-ea216464d20a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.099955 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "2601017f-22e2-4b92-a224-ea216464d20a" (UID: "2601017f-22e2-4b92-a224-ea216464d20a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.162728 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.162767 4930 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.162782 4930 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.162798 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.162811 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjw9t\" (UniqueName: \"kubernetes.io/projected/2601017f-22e2-4b92-a224-ea216464d20a-kube-api-access-gjw9t\") on node \"crc\" DevicePath \"\"" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.162822 4930 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2601017f-22e2-4b92-a224-ea216464d20a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.502478 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" 
event={"ID":"2601017f-22e2-4b92-a224-ea216464d20a","Type":"ContainerDied","Data":"faa9686a9b08925394aa2140b10ac2ec38717f958d1988c4157e4e0bf6beb372"} Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.502524 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faa9686a9b08925394aa2140b10ac2ec38717f958d1988c4157e4e0bf6beb372" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.502622 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.609340 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7"] Nov 24 12:32:21 crc kubenswrapper[4930]: E1124 12:32:21.609849 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerName="registry-server" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.609871 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerName="registry-server" Nov 24 12:32:21 crc kubenswrapper[4930]: E1124 12:32:21.609907 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerName="extract-content" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.609916 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerName="extract-content" Nov 24 12:32:21 crc kubenswrapper[4930]: E1124 12:32:21.609931 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2601017f-22e2-4b92-a224-ea216464d20a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.609943 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="2601017f-22e2-4b92-a224-ea216464d20a" 
containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 12:32:21 crc kubenswrapper[4930]: E1124 12:32:21.609977 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerName="extract-utilities" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.609986 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerName="extract-utilities" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.610211 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d23c44c-6fd4-463b-8ed1-2f138fd0122c" containerName="registry-server" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.610255 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="2601017f-22e2-4b92-a224-ea216464d20a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.611256 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.613679 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.614077 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.614438 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.614577 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.616575 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.633075 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7"] Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.672962 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.673045 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.673217 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.673411 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czp8w\" (UniqueName: \"kubernetes.io/projected/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-kube-api-access-czp8w\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.673705 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.775918 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.775975 4930 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.776019 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.776077 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czp8w\" (UniqueName: \"kubernetes.io/projected/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-kube-api-access-czp8w\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.776137 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.779940 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.785192 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.785212 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.789819 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.795653 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czp8w\" (UniqueName: \"kubernetes.io/projected/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-kube-api-access-czp8w\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:21 crc kubenswrapper[4930]: I1124 12:32:21.930731 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:32:22 crc kubenswrapper[4930]: I1124 12:32:22.084936 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:32:22 crc kubenswrapper[4930]: E1124 12:32:22.085615 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:32:22 crc kubenswrapper[4930]: I1124 12:32:22.420215 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7"] Nov 24 12:32:22 crc kubenswrapper[4930]: I1124 12:32:22.510833 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" event={"ID":"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c","Type":"ContainerStarted","Data":"dcf7b150122efc9d7ab36dd7276ffbaddf318e50d72a6d2b6c4cc01113f33c18"} Nov 24 12:32:23 crc kubenswrapper[4930]: I1124 12:32:23.521222 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" event={"ID":"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c","Type":"ContainerStarted","Data":"d0c7e4f1aeb2fc86ec857720acb4087d7a01573d0be6b6de36f3bea0609882ad"} Nov 24 12:32:23 crc kubenswrapper[4930]: I1124 12:32:23.547179 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" podStartSLOduration=1.829620476 podStartE2EDuration="2.547160399s" podCreationTimestamp="2025-11-24 12:32:21 +0000 UTC" firstStartedPulling="2025-11-24 12:32:22.426372443 +0000 
UTC m=+1989.040700393" lastFinishedPulling="2025-11-24 12:32:23.143912366 +0000 UTC m=+1989.758240316" observedRunningTime="2025-11-24 12:32:23.543313598 +0000 UTC m=+1990.157641548" watchObservedRunningTime="2025-11-24 12:32:23.547160399 +0000 UTC m=+1990.161488349" Nov 24 12:32:33 crc kubenswrapper[4930]: I1124 12:32:33.085277 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:32:33 crc kubenswrapper[4930]: I1124 12:32:33.613164 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"e3c2378bf43082c6aae9cc114616a7eab51c58a092b8e698973e1d773ba4df0a"} Nov 24 12:35:01 crc kubenswrapper[4930]: I1124 12:35:01.808871 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:35:01 crc kubenswrapper[4930]: I1124 12:35:01.809568 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:35:31 crc kubenswrapper[4930]: I1124 12:35:31.809407 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:35:31 crc kubenswrapper[4930]: I1124 12:35:31.810651 4930 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:36:01 crc kubenswrapper[4930]: I1124 12:36:01.808819 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:36:01 crc kubenswrapper[4930]: I1124 12:36:01.809391 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:36:01 crc kubenswrapper[4930]: I1124 12:36:01.809436 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 12:36:01 crc kubenswrapper[4930]: I1124 12:36:01.810390 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3c2378bf43082c6aae9cc114616a7eab51c58a092b8e698973e1d773ba4df0a"} pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:36:01 crc kubenswrapper[4930]: I1124 12:36:01.810450 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" 
containerID="cri-o://e3c2378bf43082c6aae9cc114616a7eab51c58a092b8e698973e1d773ba4df0a" gracePeriod=600 Nov 24 12:36:02 crc kubenswrapper[4930]: I1124 12:36:02.887645 4930 generic.go:334] "Generic (PLEG): container finished" podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="e3c2378bf43082c6aae9cc114616a7eab51c58a092b8e698973e1d773ba4df0a" exitCode=0 Nov 24 12:36:02 crc kubenswrapper[4930]: I1124 12:36:02.887737 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"e3c2378bf43082c6aae9cc114616a7eab51c58a092b8e698973e1d773ba4df0a"} Nov 24 12:36:02 crc kubenswrapper[4930]: I1124 12:36:02.888217 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b"} Nov 24 12:36:02 crc kubenswrapper[4930]: I1124 12:36:02.888245 4930 scope.go:117] "RemoveContainer" containerID="bf62d0759ee44a8ba0453c9e620537c08177b382e0923245a11aeb75c151087e" Nov 24 12:36:43 crc kubenswrapper[4930]: I1124 12:36:43.266857 4930 generic.go:334] "Generic (PLEG): container finished" podID="e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c" containerID="d0c7e4f1aeb2fc86ec857720acb4087d7a01573d0be6b6de36f3bea0609882ad" exitCode=0 Nov 24 12:36:43 crc kubenswrapper[4930]: I1124 12:36:43.266962 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" event={"ID":"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c","Type":"ContainerDied","Data":"d0c7e4f1aeb2fc86ec857720acb4087d7a01573d0be6b6de36f3bea0609882ad"} Nov 24 12:36:44 crc kubenswrapper[4930]: I1124 12:36:44.746130 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:36:44 crc kubenswrapper[4930]: I1124 12:36:44.911305 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-inventory\") pod \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " Nov 24 12:36:44 crc kubenswrapper[4930]: I1124 12:36:44.911770 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-libvirt-combined-ca-bundle\") pod \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " Nov 24 12:36:44 crc kubenswrapper[4930]: I1124 12:36:44.911917 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-libvirt-secret-0\") pod \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " Nov 24 12:36:44 crc kubenswrapper[4930]: I1124 12:36:44.912045 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czp8w\" (UniqueName: \"kubernetes.io/projected/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-kube-api-access-czp8w\") pod \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " Nov 24 12:36:44 crc kubenswrapper[4930]: I1124 12:36:44.912124 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-ssh-key\") pod \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\" (UID: \"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c\") " Nov 24 12:36:44 crc kubenswrapper[4930]: I1124 12:36:44.918569 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-kube-api-access-czp8w" (OuterVolumeSpecName: "kube-api-access-czp8w") pod "e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c" (UID: "e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c"). InnerVolumeSpecName "kube-api-access-czp8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:36:44 crc kubenswrapper[4930]: I1124 12:36:44.923790 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c" (UID: "e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:36:44 crc kubenswrapper[4930]: I1124 12:36:44.954781 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-inventory" (OuterVolumeSpecName: "inventory") pod "e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c" (UID: "e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:36:44 crc kubenswrapper[4930]: I1124 12:36:44.956836 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c" (UID: "e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:36:44 crc kubenswrapper[4930]: I1124 12:36:44.967290 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c" (UID: "e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c"). 
InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.014854 4930 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.014906 4930 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.014926 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czp8w\" (UniqueName: \"kubernetes.io/projected/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-kube-api-access-czp8w\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.014945 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.014965 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.317964 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" event={"ID":"e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c","Type":"ContainerDied","Data":"dcf7b150122efc9d7ab36dd7276ffbaddf318e50d72a6d2b6c4cc01113f33c18"} Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.318347 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcf7b150122efc9d7ab36dd7276ffbaddf318e50d72a6d2b6c4cc01113f33c18" Nov 24 
12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.318041 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.391049 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9"] Nov 24 12:36:45 crc kubenswrapper[4930]: E1124 12:36:45.391459 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.391476 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.391678 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.392307 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.396205 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.397138 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.401723 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.402073 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.402090 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.402506 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.402576 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.407221 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9"] Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.424798 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: 
I1124 12:36:45.424866 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.424928 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.424997 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.425030 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcrlw\" (UniqueName: \"kubernetes.io/projected/b5e86381-1bbe-4708-a86f-da5db51c1fb7-kube-api-access-kcrlw\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.425067 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.425215 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.425255 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.425286 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.526261 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.526312 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.526345 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.526379 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.526399 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcrlw\" (UniqueName: \"kubernetes.io/projected/b5e86381-1bbe-4708-a86f-da5db51c1fb7-kube-api-access-kcrlw\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.526425 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.526466 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.526488 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.526508 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.527174 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc 
kubenswrapper[4930]: I1124 12:36:45.530879 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.532032 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.532375 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.532729 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.534424 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-cell1-compute-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.540860 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.542294 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.542887 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcrlw\" (UniqueName: \"kubernetes.io/projected/b5e86381-1bbe-4708-a86f-da5db51c1fb7-kube-api-access-kcrlw\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x7cw9\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:45 crc kubenswrapper[4930]: I1124 12:36:45.709675 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:36:46 crc kubenswrapper[4930]: I1124 12:36:46.276767 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9"] Nov 24 12:36:46 crc kubenswrapper[4930]: I1124 12:36:46.285651 4930 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:36:46 crc kubenswrapper[4930]: I1124 12:36:46.327554 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" event={"ID":"b5e86381-1bbe-4708-a86f-da5db51c1fb7","Type":"ContainerStarted","Data":"be5337b0bf663b76b7f0cfc75baa072c05472706e15933ec1e748ffbdbb269a2"} Nov 24 12:36:47 crc kubenswrapper[4930]: I1124 12:36:47.338914 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" event={"ID":"b5e86381-1bbe-4708-a86f-da5db51c1fb7","Type":"ContainerStarted","Data":"c71a7470c465c58beaff3fae92a30c462e57d234262d107988bae901b39bfe74"} Nov 24 12:36:47 crc kubenswrapper[4930]: I1124 12:36:47.362695 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" podStartSLOduration=1.874074247 podStartE2EDuration="2.36267004s" podCreationTimestamp="2025-11-24 12:36:45 +0000 UTC" firstStartedPulling="2025-11-24 12:36:46.285193313 +0000 UTC m=+2252.899521273" lastFinishedPulling="2025-11-24 12:36:46.773789116 +0000 UTC m=+2253.388117066" observedRunningTime="2025-11-24 12:36:47.356938494 +0000 UTC m=+2253.971266444" watchObservedRunningTime="2025-11-24 12:36:47.36267004 +0000 UTC m=+2253.976998030" Nov 24 12:38:31 crc kubenswrapper[4930]: I1124 12:38:31.809642 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:38:31 crc kubenswrapper[4930]: I1124 12:38:31.810044 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:39:01 crc kubenswrapper[4930]: I1124 12:39:01.809652 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:39:01 crc kubenswrapper[4930]: I1124 12:39:01.810378 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:39:31 crc kubenswrapper[4930]: I1124 12:39:31.809364 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:39:31 crc kubenswrapper[4930]: I1124 12:39:31.810196 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 
12:39:31 crc kubenswrapper[4930]: I1124 12:39:31.810264 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 12:39:31 crc kubenswrapper[4930]: I1124 12:39:31.811486 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b"} pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:39:31 crc kubenswrapper[4930]: I1124 12:39:31.853053 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" gracePeriod=600 Nov 24 12:39:31 crc kubenswrapper[4930]: E1124 12:39:31.975013 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:39:31 crc kubenswrapper[4930]: I1124 12:39:31.992039 4930 generic.go:334] "Generic (PLEG): container finished" podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" exitCode=0 Nov 24 12:39:31 crc kubenswrapper[4930]: I1124 12:39:31.992085 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" 
event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b"} Nov 24 12:39:31 crc kubenswrapper[4930]: I1124 12:39:31.992123 4930 scope.go:117] "RemoveContainer" containerID="e3c2378bf43082c6aae9cc114616a7eab51c58a092b8e698973e1d773ba4df0a" Nov 24 12:39:31 crc kubenswrapper[4930]: I1124 12:39:31.992825 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:39:31 crc kubenswrapper[4930]: E1124 12:39:31.993082 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:39:40 crc kubenswrapper[4930]: I1124 12:39:40.072860 4930 generic.go:334] "Generic (PLEG): container finished" podID="b5e86381-1bbe-4708-a86f-da5db51c1fb7" containerID="c71a7470c465c58beaff3fae92a30c462e57d234262d107988bae901b39bfe74" exitCode=0 Nov 24 12:39:40 crc kubenswrapper[4930]: I1124 12:39:40.072905 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" event={"ID":"b5e86381-1bbe-4708-a86f-da5db51c1fb7","Type":"ContainerDied","Data":"c71a7470c465c58beaff3fae92a30c462e57d234262d107988bae901b39bfe74"} Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.473281 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.553430 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-cell1-compute-config-1\") pod \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.553476 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcrlw\" (UniqueName: \"kubernetes.io/projected/b5e86381-1bbe-4708-a86f-da5db51c1fb7-kube-api-access-kcrlw\") pod \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.553521 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-migration-ssh-key-1\") pod \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.553572 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-ssh-key\") pod \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.553613 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-cell1-compute-config-0\") pod \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.553660 4930 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-inventory\") pod \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.553742 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-extra-config-0\") pod \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.553779 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-migration-ssh-key-0\") pod \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.553812 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-combined-ca-bundle\") pod \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\" (UID: \"b5e86381-1bbe-4708-a86f-da5db51c1fb7\") " Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.560149 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e86381-1bbe-4708-a86f-da5db51c1fb7-kube-api-access-kcrlw" (OuterVolumeSpecName: "kube-api-access-kcrlw") pod "b5e86381-1bbe-4708-a86f-da5db51c1fb7" (UID: "b5e86381-1bbe-4708-a86f-da5db51c1fb7"). InnerVolumeSpecName "kube-api-access-kcrlw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.565812 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b5e86381-1bbe-4708-a86f-da5db51c1fb7" (UID: "b5e86381-1bbe-4708-a86f-da5db51c1fb7"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.588817 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b5e86381-1bbe-4708-a86f-da5db51c1fb7" (UID: "b5e86381-1bbe-4708-a86f-da5db51c1fb7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.588925 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b5e86381-1bbe-4708-a86f-da5db51c1fb7" (UID: "b5e86381-1bbe-4708-a86f-da5db51c1fb7"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.591919 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b5e86381-1bbe-4708-a86f-da5db51c1fb7" (UID: "b5e86381-1bbe-4708-a86f-da5db51c1fb7"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.592506 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-inventory" (OuterVolumeSpecName: "inventory") pod "b5e86381-1bbe-4708-a86f-da5db51c1fb7" (UID: "b5e86381-1bbe-4708-a86f-da5db51c1fb7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.599006 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b5e86381-1bbe-4708-a86f-da5db51c1fb7" (UID: "b5e86381-1bbe-4708-a86f-da5db51c1fb7"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.599084 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b5e86381-1bbe-4708-a86f-da5db51c1fb7" (UID: "b5e86381-1bbe-4708-a86f-da5db51c1fb7"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.611406 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b5e86381-1bbe-4708-a86f-da5db51c1fb7" (UID: "b5e86381-1bbe-4708-a86f-da5db51c1fb7"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.656102 4930 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.656145 4930 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.656161 4930 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.656173 4930 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.656185 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcrlw\" (UniqueName: \"kubernetes.io/projected/b5e86381-1bbe-4708-a86f-da5db51c1fb7-kube-api-access-kcrlw\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.656196 4930 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.656206 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-ssh-key\") on node \"crc\" 
DevicePath \"\"" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.656217 4930 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:41 crc kubenswrapper[4930]: I1124 12:39:41.656227 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5e86381-1bbe-4708-a86f-da5db51c1fb7-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.093272 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.096533 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x7cw9" event={"ID":"b5e86381-1bbe-4708-a86f-da5db51c1fb7","Type":"ContainerDied","Data":"be5337b0bf663b76b7f0cfc75baa072c05472706e15933ec1e748ffbdbb269a2"} Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.096584 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be5337b0bf663b76b7f0cfc75baa072c05472706e15933ec1e748ffbdbb269a2" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.201669 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6"] Nov 24 12:39:42 crc kubenswrapper[4930]: E1124 12:39:42.202269 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e86381-1bbe-4708-a86f-da5db51c1fb7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.202292 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e86381-1bbe-4708-a86f-da5db51c1fb7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 
12:39:42.202649 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e86381-1bbe-4708-a86f-da5db51c1fb7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.203447 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.206991 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.207295 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.207459 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.207742 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hx5b5" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.208006 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.211437 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6"] Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.266125 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: 
I1124 12:39:42.266173 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.266225 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.266258 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.266322 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4lkc\" (UniqueName: \"kubernetes.io/projected/e5f020e4-dece-42e7-b327-99797d3b447f-kube-api-access-l4lkc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.266375 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.266399 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.367749 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.367860 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4lkc\" (UniqueName: \"kubernetes.io/projected/e5f020e4-dece-42e7-b327-99797d3b447f-kube-api-access-l4lkc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.367920 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: 
\"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.367943 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.368040 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.368067 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.368112 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.371289 
4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.371506 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.371756 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.372907 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.373356 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.379924 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.383941 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4lkc\" (UniqueName: \"kubernetes.io/projected/e5f020e4-dece-42e7-b327-99797d3b447f-kube-api-access-l4lkc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:42 crc kubenswrapper[4930]: I1124 12:39:42.526304 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:39:43 crc kubenswrapper[4930]: I1124 12:39:43.106726 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6"] Nov 24 12:39:44 crc kubenswrapper[4930]: I1124 12:39:44.126921 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" event={"ID":"e5f020e4-dece-42e7-b327-99797d3b447f","Type":"ContainerStarted","Data":"c5fd8906be0f646f65e8325af4e1247fdad2bff217699fe1fe79bddbad5130bb"} Nov 24 12:39:44 crc kubenswrapper[4930]: I1124 12:39:44.127313 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" event={"ID":"e5f020e4-dece-42e7-b327-99797d3b447f","Type":"ContainerStarted","Data":"4dd2f1cfe43c4e641eba844ea842e1a0725af463e3edb3eb5917c8c1aa522f51"} Nov 24 12:39:44 crc kubenswrapper[4930]: I1124 12:39:44.151588 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" podStartSLOduration=1.7385202610000001 podStartE2EDuration="2.151568047s" podCreationTimestamp="2025-11-24 12:39:42 +0000 UTC" firstStartedPulling="2025-11-24 12:39:43.114366971 +0000 UTC m=+2429.728694921" lastFinishedPulling="2025-11-24 12:39:43.527414757 +0000 UTC m=+2430.141742707" observedRunningTime="2025-11-24 12:39:44.146651826 +0000 UTC m=+2430.760979796" watchObservedRunningTime="2025-11-24 12:39:44.151568047 +0000 UTC m=+2430.765895997" Nov 24 12:39:46 crc kubenswrapper[4930]: I1124 12:39:46.084462 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:39:46 crc kubenswrapper[4930]: E1124 12:39:46.085073 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:40:01 crc kubenswrapper[4930]: I1124 12:40:01.085010 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:40:01 crc kubenswrapper[4930]: E1124 12:40:01.085782 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:40:15 crc kubenswrapper[4930]: I1124 12:40:15.084884 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:40:15 crc kubenswrapper[4930]: E1124 12:40:15.085951 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:40:26 crc kubenswrapper[4930]: I1124 12:40:26.084478 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:40:26 crc kubenswrapper[4930]: E1124 12:40:26.085733 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:40:39 crc kubenswrapper[4930]: I1124 12:40:39.084831 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:40:39 crc kubenswrapper[4930]: E1124 12:40:39.085683 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:40:52 crc kubenswrapper[4930]: I1124 12:40:52.085309 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:40:52 crc kubenswrapper[4930]: E1124 12:40:52.086043 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:41:05 crc kubenswrapper[4930]: I1124 12:41:05.084364 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:41:05 crc kubenswrapper[4930]: E1124 12:41:05.085288 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:41:09 crc kubenswrapper[4930]: I1124 12:41:09.805645 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6q4sb"] Nov 24 12:41:09 crc kubenswrapper[4930]: I1124 12:41:09.808346 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:09 crc kubenswrapper[4930]: I1124 12:41:09.831086 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6q4sb"] Nov 24 12:41:09 crc kubenswrapper[4930]: I1124 12:41:09.917793 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-catalog-content\") pod \"redhat-operators-6q4sb\" (UID: \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\") " pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:09 crc kubenswrapper[4930]: I1124 12:41:09.917870 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcbnn\" (UniqueName: \"kubernetes.io/projected/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-kube-api-access-qcbnn\") pod \"redhat-operators-6q4sb\" (UID: \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\") " pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:09 crc kubenswrapper[4930]: I1124 12:41:09.918319 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-utilities\") pod \"redhat-operators-6q4sb\" (UID: 
\"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\") " pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:10 crc kubenswrapper[4930]: I1124 12:41:10.020722 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-catalog-content\") pod \"redhat-operators-6q4sb\" (UID: \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\") " pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:10 crc kubenswrapper[4930]: I1124 12:41:10.020789 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcbnn\" (UniqueName: \"kubernetes.io/projected/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-kube-api-access-qcbnn\") pod \"redhat-operators-6q4sb\" (UID: \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\") " pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:10 crc kubenswrapper[4930]: I1124 12:41:10.020954 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-utilities\") pod \"redhat-operators-6q4sb\" (UID: \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\") " pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:10 crc kubenswrapper[4930]: I1124 12:41:10.021719 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-utilities\") pod \"redhat-operators-6q4sb\" (UID: \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\") " pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:10 crc kubenswrapper[4930]: I1124 12:41:10.021727 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-catalog-content\") pod \"redhat-operators-6q4sb\" (UID: \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\") " 
pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:10 crc kubenswrapper[4930]: I1124 12:41:10.059574 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcbnn\" (UniqueName: \"kubernetes.io/projected/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-kube-api-access-qcbnn\") pod \"redhat-operators-6q4sb\" (UID: \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\") " pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:10 crc kubenswrapper[4930]: I1124 12:41:10.146646 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:10 crc kubenswrapper[4930]: I1124 12:41:10.606213 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6q4sb"] Nov 24 12:41:11 crc kubenswrapper[4930]: I1124 12:41:11.292498 4930 generic.go:334] "Generic (PLEG): container finished" podID="761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" containerID="ad240e77505983983b11a3b922f17ac22f2b0c90ec4469081a297c071dd48e96" exitCode=0 Nov 24 12:41:11 crc kubenswrapper[4930]: I1124 12:41:11.292590 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6q4sb" event={"ID":"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384","Type":"ContainerDied","Data":"ad240e77505983983b11a3b922f17ac22f2b0c90ec4469081a297c071dd48e96"} Nov 24 12:41:11 crc kubenswrapper[4930]: I1124 12:41:11.292979 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6q4sb" event={"ID":"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384","Type":"ContainerStarted","Data":"16e24b56f40c68ae58e70b77c217a657a034d86fedb5b86dde4a15589a4308d3"} Nov 24 12:41:12 crc kubenswrapper[4930]: I1124 12:41:12.303332 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6q4sb" 
event={"ID":"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384","Type":"ContainerStarted","Data":"6a86ed6579fb21c1a8e502325c2ae9df7d40e22e41c4acfa31002a6a714e0b5a"} Nov 24 12:41:13 crc kubenswrapper[4930]: I1124 12:41:13.315394 4930 generic.go:334] "Generic (PLEG): container finished" podID="761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" containerID="6a86ed6579fb21c1a8e502325c2ae9df7d40e22e41c4acfa31002a6a714e0b5a" exitCode=0 Nov 24 12:41:13 crc kubenswrapper[4930]: I1124 12:41:13.315435 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6q4sb" event={"ID":"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384","Type":"ContainerDied","Data":"6a86ed6579fb21c1a8e502325c2ae9df7d40e22e41c4acfa31002a6a714e0b5a"} Nov 24 12:41:14 crc kubenswrapper[4930]: I1124 12:41:14.324847 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6q4sb" event={"ID":"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384","Type":"ContainerStarted","Data":"3aae4735b4d1556828805ab8cbcd40bfd41aac39c7bbb1485abef49d574cde4d"} Nov 24 12:41:14 crc kubenswrapper[4930]: I1124 12:41:14.340906 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6q4sb" podStartSLOduration=2.899432749 podStartE2EDuration="5.34089182s" podCreationTimestamp="2025-11-24 12:41:09 +0000 UTC" firstStartedPulling="2025-11-24 12:41:11.2969081 +0000 UTC m=+2517.911236090" lastFinishedPulling="2025-11-24 12:41:13.738367211 +0000 UTC m=+2520.352695161" observedRunningTime="2025-11-24 12:41:14.338862771 +0000 UTC m=+2520.953190731" watchObservedRunningTime="2025-11-24 12:41:14.34089182 +0000 UTC m=+2520.955219770" Nov 24 12:41:16 crc kubenswrapper[4930]: I1124 12:41:16.085385 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:41:16 crc kubenswrapper[4930]: E1124 12:41:16.086113 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:41:20 crc kubenswrapper[4930]: I1124 12:41:20.147433 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:20 crc kubenswrapper[4930]: I1124 12:41:20.147924 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:20 crc kubenswrapper[4930]: I1124 12:41:20.210737 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:20 crc kubenswrapper[4930]: I1124 12:41:20.452822 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:20 crc kubenswrapper[4930]: I1124 12:41:20.504759 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6q4sb"] Nov 24 12:41:22 crc kubenswrapper[4930]: I1124 12:41:22.407842 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6q4sb" podUID="761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" containerName="registry-server" containerID="cri-o://3aae4735b4d1556828805ab8cbcd40bfd41aac39c7bbb1485abef49d574cde4d" gracePeriod=2 Nov 24 12:41:22 crc kubenswrapper[4930]: I1124 12:41:22.852558 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:22 crc kubenswrapper[4930]: I1124 12:41:22.879468 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-utilities\") pod \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\" (UID: \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\") " Nov 24 12:41:22 crc kubenswrapper[4930]: I1124 12:41:22.879555 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-catalog-content\") pod \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\" (UID: \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\") " Nov 24 12:41:22 crc kubenswrapper[4930]: I1124 12:41:22.879645 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcbnn\" (UniqueName: \"kubernetes.io/projected/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-kube-api-access-qcbnn\") pod \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\" (UID: \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\") " Nov 24 12:41:22 crc kubenswrapper[4930]: I1124 12:41:22.881768 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-utilities" (OuterVolumeSpecName: "utilities") pod "761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" (UID: "761bb2a2-29d2-4b4c-8b7c-1e611bcbc384"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:41:22 crc kubenswrapper[4930]: I1124 12:41:22.886692 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-kube-api-access-qcbnn" (OuterVolumeSpecName: "kube-api-access-qcbnn") pod "761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" (UID: "761bb2a2-29d2-4b4c-8b7c-1e611bcbc384"). InnerVolumeSpecName "kube-api-access-qcbnn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:41:22 crc kubenswrapper[4930]: I1124 12:41:22.980233 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" (UID: "761bb2a2-29d2-4b4c-8b7c-1e611bcbc384"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:41:22 crc kubenswrapper[4930]: I1124 12:41:22.981146 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-catalog-content\") pod \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\" (UID: \"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384\") " Nov 24 12:41:22 crc kubenswrapper[4930]: I1124 12:41:22.981565 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:41:22 crc kubenswrapper[4930]: I1124 12:41:22.981583 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcbnn\" (UniqueName: \"kubernetes.io/projected/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-kube-api-access-qcbnn\") on node \"crc\" DevicePath \"\"" Nov 24 12:41:22 crc kubenswrapper[4930]: W1124 12:41:22.981642 4930 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384/volumes/kubernetes.io~empty-dir/catalog-content Nov 24 12:41:22 crc kubenswrapper[4930]: I1124 12:41:22.981649 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" (UID: "761bb2a2-29d2-4b4c-8b7c-1e611bcbc384"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.083845 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.422804 4930 generic.go:334] "Generic (PLEG): container finished" podID="761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" containerID="3aae4735b4d1556828805ab8cbcd40bfd41aac39c7bbb1485abef49d574cde4d" exitCode=0 Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.422856 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6q4sb" event={"ID":"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384","Type":"ContainerDied","Data":"3aae4735b4d1556828805ab8cbcd40bfd41aac39c7bbb1485abef49d574cde4d"} Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.422884 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6q4sb" event={"ID":"761bb2a2-29d2-4b4c-8b7c-1e611bcbc384","Type":"ContainerDied","Data":"16e24b56f40c68ae58e70b77c217a657a034d86fedb5b86dde4a15589a4308d3"} Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.422903 4930 scope.go:117] "RemoveContainer" containerID="3aae4735b4d1556828805ab8cbcd40bfd41aac39c7bbb1485abef49d574cde4d" Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.423344 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6q4sb" Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.464456 4930 scope.go:117] "RemoveContainer" containerID="6a86ed6579fb21c1a8e502325c2ae9df7d40e22e41c4acfa31002a6a714e0b5a" Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.467738 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6q4sb"] Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.474935 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6q4sb"] Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.493630 4930 scope.go:117] "RemoveContainer" containerID="ad240e77505983983b11a3b922f17ac22f2b0c90ec4469081a297c071dd48e96" Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.537199 4930 scope.go:117] "RemoveContainer" containerID="3aae4735b4d1556828805ab8cbcd40bfd41aac39c7bbb1485abef49d574cde4d" Nov 24 12:41:23 crc kubenswrapper[4930]: E1124 12:41:23.537856 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aae4735b4d1556828805ab8cbcd40bfd41aac39c7bbb1485abef49d574cde4d\": container with ID starting with 3aae4735b4d1556828805ab8cbcd40bfd41aac39c7bbb1485abef49d574cde4d not found: ID does not exist" containerID="3aae4735b4d1556828805ab8cbcd40bfd41aac39c7bbb1485abef49d574cde4d" Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.537929 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aae4735b4d1556828805ab8cbcd40bfd41aac39c7bbb1485abef49d574cde4d"} err="failed to get container status \"3aae4735b4d1556828805ab8cbcd40bfd41aac39c7bbb1485abef49d574cde4d\": rpc error: code = NotFound desc = could not find container \"3aae4735b4d1556828805ab8cbcd40bfd41aac39c7bbb1485abef49d574cde4d\": container with ID starting with 3aae4735b4d1556828805ab8cbcd40bfd41aac39c7bbb1485abef49d574cde4d not found: ID does 
not exist" Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.537973 4930 scope.go:117] "RemoveContainer" containerID="6a86ed6579fb21c1a8e502325c2ae9df7d40e22e41c4acfa31002a6a714e0b5a" Nov 24 12:41:23 crc kubenswrapper[4930]: E1124 12:41:23.538447 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a86ed6579fb21c1a8e502325c2ae9df7d40e22e41c4acfa31002a6a714e0b5a\": container with ID starting with 6a86ed6579fb21c1a8e502325c2ae9df7d40e22e41c4acfa31002a6a714e0b5a not found: ID does not exist" containerID="6a86ed6579fb21c1a8e502325c2ae9df7d40e22e41c4acfa31002a6a714e0b5a" Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.538483 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a86ed6579fb21c1a8e502325c2ae9df7d40e22e41c4acfa31002a6a714e0b5a"} err="failed to get container status \"6a86ed6579fb21c1a8e502325c2ae9df7d40e22e41c4acfa31002a6a714e0b5a\": rpc error: code = NotFound desc = could not find container \"6a86ed6579fb21c1a8e502325c2ae9df7d40e22e41c4acfa31002a6a714e0b5a\": container with ID starting with 6a86ed6579fb21c1a8e502325c2ae9df7d40e22e41c4acfa31002a6a714e0b5a not found: ID does not exist" Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.538511 4930 scope.go:117] "RemoveContainer" containerID="ad240e77505983983b11a3b922f17ac22f2b0c90ec4469081a297c071dd48e96" Nov 24 12:41:23 crc kubenswrapper[4930]: E1124 12:41:23.538882 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad240e77505983983b11a3b922f17ac22f2b0c90ec4469081a297c071dd48e96\": container with ID starting with ad240e77505983983b11a3b922f17ac22f2b0c90ec4469081a297c071dd48e96 not found: ID does not exist" containerID="ad240e77505983983b11a3b922f17ac22f2b0c90ec4469081a297c071dd48e96" Nov 24 12:41:23 crc kubenswrapper[4930]: I1124 12:41:23.538952 4930 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad240e77505983983b11a3b922f17ac22f2b0c90ec4469081a297c071dd48e96"} err="failed to get container status \"ad240e77505983983b11a3b922f17ac22f2b0c90ec4469081a297c071dd48e96\": rpc error: code = NotFound desc = could not find container \"ad240e77505983983b11a3b922f17ac22f2b0c90ec4469081a297c071dd48e96\": container with ID starting with ad240e77505983983b11a3b922f17ac22f2b0c90ec4469081a297c071dd48e96 not found: ID does not exist" Nov 24 12:41:24 crc kubenswrapper[4930]: I1124 12:41:24.100246 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" path="/var/lib/kubelet/pods/761bb2a2-29d2-4b4c-8b7c-1e611bcbc384/volumes" Nov 24 12:41:28 crc kubenswrapper[4930]: I1124 12:41:28.084594 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:41:28 crc kubenswrapper[4930]: E1124 12:41:28.085146 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:41:39 crc kubenswrapper[4930]: I1124 12:41:39.085033 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:41:39 crc kubenswrapper[4930]: E1124 12:41:39.086203 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:41:53 crc kubenswrapper[4930]: I1124 12:41:53.084894 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:41:53 crc kubenswrapper[4930]: E1124 12:41:53.085929 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.136765 4930 generic.go:334] "Generic (PLEG): container finished" podID="e5f020e4-dece-42e7-b327-99797d3b447f" containerID="c5fd8906be0f646f65e8325af4e1247fdad2bff217699fe1fe79bddbad5130bb" exitCode=0 Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.136871 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" event={"ID":"e5f020e4-dece-42e7-b327-99797d3b447f","Type":"ContainerDied","Data":"c5fd8906be0f646f65e8325af4e1247fdad2bff217699fe1fe79bddbad5130bb"} Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.460350 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4zf64"] Nov 24 12:42:00 crc kubenswrapper[4930]: E1124 12:42:00.460735 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" containerName="extract-content" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.460747 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" containerName="extract-content" Nov 24 12:42:00 crc kubenswrapper[4930]: E1124 
12:42:00.460764 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" containerName="extract-utilities" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.460770 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" containerName="extract-utilities" Nov 24 12:42:00 crc kubenswrapper[4930]: E1124 12:42:00.460808 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" containerName="registry-server" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.460815 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" containerName="registry-server" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.460998 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="761bb2a2-29d2-4b4c-8b7c-1e611bcbc384" containerName="registry-server" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.462358 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.479951 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4zf64"] Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.567457 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-catalog-content\") pod \"certified-operators-4zf64\" (UID: \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\") " pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.567511 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-utilities\") pod \"certified-operators-4zf64\" (UID: \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\") " pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.567623 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxm7\" (UniqueName: \"kubernetes.io/projected/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-kube-api-access-prxm7\") pod \"certified-operators-4zf64\" (UID: \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\") " pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.669094 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prxm7\" (UniqueName: \"kubernetes.io/projected/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-kube-api-access-prxm7\") pod \"certified-operators-4zf64\" (UID: \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\") " pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.669263 4930 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-catalog-content\") pod \"certified-operators-4zf64\" (UID: \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\") " pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.669287 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-utilities\") pod \"certified-operators-4zf64\" (UID: \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\") " pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.669804 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-utilities\") pod \"certified-operators-4zf64\" (UID: \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\") " pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.669955 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-catalog-content\") pod \"certified-operators-4zf64\" (UID: \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\") " pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.694296 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prxm7\" (UniqueName: \"kubernetes.io/projected/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-kube-api-access-prxm7\") pod \"certified-operators-4zf64\" (UID: \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\") " pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:00 crc kubenswrapper[4930]: I1124 12:42:00.807576 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.408336 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4zf64"] Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.586949 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.597153 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-0\") pod \"e5f020e4-dece-42e7-b327-99797d3b447f\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.597229 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-inventory\") pod \"e5f020e4-dece-42e7-b327-99797d3b447f\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.597279 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-1\") pod \"e5f020e4-dece-42e7-b327-99797d3b447f\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.597296 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4lkc\" (UniqueName: \"kubernetes.io/projected/e5f020e4-dece-42e7-b327-99797d3b447f-kube-api-access-l4lkc\") pod \"e5f020e4-dece-42e7-b327-99797d3b447f\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 
12:42:01.597354 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-telemetry-combined-ca-bundle\") pod \"e5f020e4-dece-42e7-b327-99797d3b447f\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.597420 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ssh-key\") pod \"e5f020e4-dece-42e7-b327-99797d3b447f\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.597473 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-2\") pod \"e5f020e4-dece-42e7-b327-99797d3b447f\" (UID: \"e5f020e4-dece-42e7-b327-99797d3b447f\") " Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.608291 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f020e4-dece-42e7-b327-99797d3b447f-kube-api-access-l4lkc" (OuterVolumeSpecName: "kube-api-access-l4lkc") pod "e5f020e4-dece-42e7-b327-99797d3b447f" (UID: "e5f020e4-dece-42e7-b327-99797d3b447f"). InnerVolumeSpecName "kube-api-access-l4lkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.631822 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e5f020e4-dece-42e7-b327-99797d3b447f" (UID: "e5f020e4-dece-42e7-b327-99797d3b447f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.660821 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-inventory" (OuterVolumeSpecName: "inventory") pod "e5f020e4-dece-42e7-b327-99797d3b447f" (UID: "e5f020e4-dece-42e7-b327-99797d3b447f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.667674 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "e5f020e4-dece-42e7-b327-99797d3b447f" (UID: "e5f020e4-dece-42e7-b327-99797d3b447f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.669000 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e5f020e4-dece-42e7-b327-99797d3b447f" (UID: "e5f020e4-dece-42e7-b327-99797d3b447f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.678015 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "e5f020e4-dece-42e7-b327-99797d3b447f" (UID: "e5f020e4-dece-42e7-b327-99797d3b447f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.700669 4930 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.700698 4930 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.700710 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4lkc\" (UniqueName: \"kubernetes.io/projected/e5f020e4-dece-42e7-b327-99797d3b447f-kube-api-access-l4lkc\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.700721 4930 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.700728 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.700737 4930 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.704622 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: 
"ceilometer-compute-config-data-0") pod "e5f020e4-dece-42e7-b327-99797d3b447f" (UID: "e5f020e4-dece-42e7-b327-99797d3b447f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:42:01 crc kubenswrapper[4930]: I1124 12:42:01.802669 4930 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e5f020e4-dece-42e7-b327-99797d3b447f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:02 crc kubenswrapper[4930]: I1124 12:42:02.162438 4930 generic.go:334] "Generic (PLEG): container finished" podID="f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" containerID="d946f4e2a32d1f2e76b5220961d3f1959dfb6a7ea293d0d113fe7809dbae345a" exitCode=0 Nov 24 12:42:02 crc kubenswrapper[4930]: I1124 12:42:02.162792 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zf64" event={"ID":"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36","Type":"ContainerDied","Data":"d946f4e2a32d1f2e76b5220961d3f1959dfb6a7ea293d0d113fe7809dbae345a"} Nov 24 12:42:02 crc kubenswrapper[4930]: I1124 12:42:02.162817 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zf64" event={"ID":"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36","Type":"ContainerStarted","Data":"2a6cf711d3463f905a8a2094933542277703108ce60bd82e638830623e7a6403"} Nov 24 12:42:02 crc kubenswrapper[4930]: I1124 12:42:02.166073 4930 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:42:02 crc kubenswrapper[4930]: I1124 12:42:02.170426 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" event={"ID":"e5f020e4-dece-42e7-b327-99797d3b447f","Type":"ContainerDied","Data":"4dd2f1cfe43c4e641eba844ea842e1a0725af463e3edb3eb5917c8c1aa522f51"} Nov 24 12:42:02 crc kubenswrapper[4930]: I1124 12:42:02.170482 
4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd2f1cfe43c4e641eba844ea842e1a0725af463e3edb3eb5917c8c1aa522f51" Nov 24 12:42:02 crc kubenswrapper[4930]: I1124 12:42:02.170591 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6" Nov 24 12:42:03 crc kubenswrapper[4930]: I1124 12:42:03.188753 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zf64" event={"ID":"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36","Type":"ContainerStarted","Data":"1dcb47ce131c7fe4f7e30976413565510b8f660f63deac42098c3072438040ed"} Nov 24 12:42:04 crc kubenswrapper[4930]: I1124 12:42:04.205055 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zf64" event={"ID":"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36","Type":"ContainerDied","Data":"1dcb47ce131c7fe4f7e30976413565510b8f660f63deac42098c3072438040ed"} Nov 24 12:42:04 crc kubenswrapper[4930]: I1124 12:42:04.204986 4930 generic.go:334] "Generic (PLEG): container finished" podID="f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" containerID="1dcb47ce131c7fe4f7e30976413565510b8f660f63deac42098c3072438040ed" exitCode=0 Nov 24 12:42:05 crc kubenswrapper[4930]: I1124 12:42:05.084179 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:42:05 crc kubenswrapper[4930]: E1124 12:42:05.086822 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:42:05 crc kubenswrapper[4930]: I1124 
12:42:05.225349 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zf64" event={"ID":"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36","Type":"ContainerStarted","Data":"3cf63bde00cb7929671ba21cbdb6b09db17ce5be84fd59f8715988bbc5b1ed29"} Nov 24 12:42:05 crc kubenswrapper[4930]: I1124 12:42:05.250503 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4zf64" podStartSLOduration=2.720604049 podStartE2EDuration="5.250481049s" podCreationTimestamp="2025-11-24 12:42:00 +0000 UTC" firstStartedPulling="2025-11-24 12:42:02.165844008 +0000 UTC m=+2568.780171958" lastFinishedPulling="2025-11-24 12:42:04.695720988 +0000 UTC m=+2571.310048958" observedRunningTime="2025-11-24 12:42:05.242833897 +0000 UTC m=+2571.857161847" watchObservedRunningTime="2025-11-24 12:42:05.250481049 +0000 UTC m=+2571.864808999" Nov 24 12:42:10 crc kubenswrapper[4930]: I1124 12:42:10.807987 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:10 crc kubenswrapper[4930]: I1124 12:42:10.808515 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:10 crc kubenswrapper[4930]: I1124 12:42:10.850856 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:11 crc kubenswrapper[4930]: I1124 12:42:11.347013 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:11 crc kubenswrapper[4930]: I1124 12:42:11.399828 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4zf64"] Nov 24 12:42:13 crc kubenswrapper[4930]: I1124 12:42:13.304174 4930 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-4zf64" podUID="f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" containerName="registry-server" containerID="cri-o://3cf63bde00cb7929671ba21cbdb6b09db17ce5be84fd59f8715988bbc5b1ed29" gracePeriod=2 Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.322248 4930 generic.go:334] "Generic (PLEG): container finished" podID="f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" containerID="3cf63bde00cb7929671ba21cbdb6b09db17ce5be84fd59f8715988bbc5b1ed29" exitCode=0 Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.322317 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zf64" event={"ID":"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36","Type":"ContainerDied","Data":"3cf63bde00cb7929671ba21cbdb6b09db17ce5be84fd59f8715988bbc5b1ed29"} Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.322630 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zf64" event={"ID":"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36","Type":"ContainerDied","Data":"2a6cf711d3463f905a8a2094933542277703108ce60bd82e638830623e7a6403"} Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.322658 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a6cf711d3463f905a8a2094933542277703108ce60bd82e638830623e7a6403" Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.367815 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.473755 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-catalog-content\") pod \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\" (UID: \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\") " Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.473838 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prxm7\" (UniqueName: \"kubernetes.io/projected/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-kube-api-access-prxm7\") pod \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\" (UID: \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\") " Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.473909 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-utilities\") pod \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\" (UID: \"f45f5dd7-14ae-45ac-8bbb-ccf58006bc36\") " Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.476141 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-utilities" (OuterVolumeSpecName: "utilities") pod "f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" (UID: "f45f5dd7-14ae-45ac-8bbb-ccf58006bc36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.481421 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-kube-api-access-prxm7" (OuterVolumeSpecName: "kube-api-access-prxm7") pod "f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" (UID: "f45f5dd7-14ae-45ac-8bbb-ccf58006bc36"). InnerVolumeSpecName "kube-api-access-prxm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.539821 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" (UID: "f45f5dd7-14ae-45ac-8bbb-ccf58006bc36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.576458 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.576495 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prxm7\" (UniqueName: \"kubernetes.io/projected/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-kube-api-access-prxm7\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:14 crc kubenswrapper[4930]: I1124 12:42:14.576509 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:15 crc kubenswrapper[4930]: I1124 12:42:15.331573 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4zf64" Nov 24 12:42:15 crc kubenswrapper[4930]: I1124 12:42:15.375362 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4zf64"] Nov 24 12:42:15 crc kubenswrapper[4930]: I1124 12:42:15.394829 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4zf64"] Nov 24 12:42:16 crc kubenswrapper[4930]: I1124 12:42:16.106803 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" path="/var/lib/kubelet/pods/f45f5dd7-14ae-45ac-8bbb-ccf58006bc36/volumes" Nov 24 12:42:17 crc kubenswrapper[4930]: I1124 12:42:17.085201 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:42:17 crc kubenswrapper[4930]: E1124 12:42:17.085850 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:42:32 crc kubenswrapper[4930]: I1124 12:42:32.085165 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:42:32 crc kubenswrapper[4930]: E1124 12:42:32.088368 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" 
podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:42:43 crc kubenswrapper[4930]: I1124 12:42:43.084597 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:42:43 crc kubenswrapper[4930]: E1124 12:42:43.085322 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.139497 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 12:42:54 crc kubenswrapper[4930]: E1124 12:42:54.140331 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" containerName="extract-utilities" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.140344 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" containerName="extract-utilities" Nov 24 12:42:54 crc kubenswrapper[4930]: E1124 12:42:54.140367 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" containerName="registry-server" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.140373 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" containerName="registry-server" Nov 24 12:42:54 crc kubenswrapper[4930]: E1124 12:42:54.140393 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f020e4-dece-42e7-b327-99797d3b447f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.140400 4930 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="e5f020e4-dece-42e7-b327-99797d3b447f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 12:42:54 crc kubenswrapper[4930]: E1124 12:42:54.140413 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" containerName="extract-content" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.140419 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" containerName="extract-content" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.140907 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f020e4-dece-42e7-b327-99797d3b447f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.140925 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45f5dd7-14ae-45ac-8bbb-ccf58006bc36" containerName="registry-server" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.141504 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.148371 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.149020 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-cs6lk" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.149365 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.154023 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.166739 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.312368 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.312436 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.312491 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-ssh-key\") pod 
\"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.312522 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.312681 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-config-data\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.312756 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.312878 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.313031 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.313163 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2wg7\" (UniqueName: \"kubernetes.io/projected/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-kube-api-access-n2wg7\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.415023 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.415148 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.415467 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2wg7\" (UniqueName: \"kubernetes.io/projected/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-kube-api-access-n2wg7\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.415499 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: 
\"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.415605 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.415681 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.415730 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.415797 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-config-data\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.415842 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " 
pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.416305 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.416641 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.417007 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-config-data\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.417513 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.418193 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc 
kubenswrapper[4930]: I1124 12:42:54.422680 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.422780 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.423261 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.439334 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2wg7\" (UniqueName: \"kubernetes.io/projected/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-kube-api-access-n2wg7\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.459004 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.483090 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 12:42:54 crc kubenswrapper[4930]: I1124 12:42:54.947267 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 12:42:55 crc kubenswrapper[4930]: I1124 12:42:55.729724 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6a7fbabe-a7e2-469c-b6aa-22973dd510b3","Type":"ContainerStarted","Data":"08ce9626187542031b663ee08261a3c561feab81796989f8f794a6536b412e2f"} Nov 24 12:42:57 crc kubenswrapper[4930]: I1124 12:42:57.084528 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:42:57 crc kubenswrapper[4930]: E1124 12:42:57.084808 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:43:10 crc kubenswrapper[4930]: I1124 12:43:10.085449 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:43:10 crc kubenswrapper[4930]: E1124 12:43:10.086212 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:43:21 crc kubenswrapper[4930]: E1124 12:43:21.917636 4930 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 24 12:43:21 crc kubenswrapper[4930]: E1124 12:43:21.918969 4930 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh
_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2wg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(6a7fbabe-a7e2-469c-b6aa-22973dd510b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:43:21 crc kubenswrapper[4930]: E1124 12:43:21.920168 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="6a7fbabe-a7e2-469c-b6aa-22973dd510b3" Nov 24 12:43:22 crc kubenswrapper[4930]: E1124 12:43:22.018313 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="6a7fbabe-a7e2-469c-b6aa-22973dd510b3" Nov 24 12:43:25 crc kubenswrapper[4930]: I1124 12:43:25.086071 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:43:25 crc kubenswrapper[4930]: E1124 12:43:25.087512 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:43:36 crc kubenswrapper[4930]: I1124 12:43:36.588611 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 24 12:43:39 crc kubenswrapper[4930]: I1124 12:43:39.085349 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:43:39 crc kubenswrapper[4930]: E1124 12:43:39.085935 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:43:39 crc kubenswrapper[4930]: I1124 12:43:39.187046 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"6a7fbabe-a7e2-469c-b6aa-22973dd510b3","Type":"ContainerStarted","Data":"cf07a45ccc1f1ca2f2ee2d8ed45d05b2c2bcd930b07ea9515b9f4996fcc611c1"} Nov 24 12:43:39 crc kubenswrapper[4930]: I1124 12:43:39.210938 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.583378183 podStartE2EDuration="46.210921543s" podCreationTimestamp="2025-11-24 12:42:53 +0000 UTC" firstStartedPulling="2025-11-24 12:42:54.950873053 +0000 UTC m=+2621.565201003" lastFinishedPulling="2025-11-24 12:43:36.578416413 +0000 UTC m=+2663.192744363" observedRunningTime="2025-11-24 12:43:39.207843426 +0000 UTC m=+2665.822171386" watchObservedRunningTime="2025-11-24 12:43:39.210921543 +0000 UTC m=+2665.825249493" Nov 24 12:43:54 crc kubenswrapper[4930]: I1124 12:43:54.104280 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:43:54 crc kubenswrapper[4930]: E1124 12:43:54.105046 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:44:09 crc kubenswrapper[4930]: I1124 12:44:09.084781 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:44:09 crc kubenswrapper[4930]: E1124 12:44:09.085957 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:44:20 crc kubenswrapper[4930]: I1124 12:44:20.088816 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:44:20 crc kubenswrapper[4930]: E1124 12:44:20.089836 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:44:32 crc kubenswrapper[4930]: I1124 12:44:32.085007 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:44:32 crc kubenswrapper[4930]: I1124 12:44:32.712679 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"3d8009f0d50af8bcd32af8108bb1f6f40bc204c198cc350825e7feae500f7e1e"} Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.158432 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn"] Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.160172 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.163407 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.163551 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.177212 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn"] Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.327724 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2nnx\" (UniqueName: \"kubernetes.io/projected/018a5555-8caf-406b-b858-5cd36d325445-kube-api-access-k2nnx\") pod \"collect-profiles-29399805-4w7fn\" (UID: \"018a5555-8caf-406b-b858-5cd36d325445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.327917 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/018a5555-8caf-406b-b858-5cd36d325445-config-volume\") pod \"collect-profiles-29399805-4w7fn\" (UID: \"018a5555-8caf-406b-b858-5cd36d325445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.328008 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/018a5555-8caf-406b-b858-5cd36d325445-secret-volume\") pod \"collect-profiles-29399805-4w7fn\" (UID: \"018a5555-8caf-406b-b858-5cd36d325445\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.430434 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/018a5555-8caf-406b-b858-5cd36d325445-config-volume\") pod \"collect-profiles-29399805-4w7fn\" (UID: \"018a5555-8caf-406b-b858-5cd36d325445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.430500 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/018a5555-8caf-406b-b858-5cd36d325445-secret-volume\") pod \"collect-profiles-29399805-4w7fn\" (UID: \"018a5555-8caf-406b-b858-5cd36d325445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.430640 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2nnx\" (UniqueName: \"kubernetes.io/projected/018a5555-8caf-406b-b858-5cd36d325445-kube-api-access-k2nnx\") pod \"collect-profiles-29399805-4w7fn\" (UID: \"018a5555-8caf-406b-b858-5cd36d325445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.431191 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/018a5555-8caf-406b-b858-5cd36d325445-config-volume\") pod \"collect-profiles-29399805-4w7fn\" (UID: \"018a5555-8caf-406b-b858-5cd36d325445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.440956 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/018a5555-8caf-406b-b858-5cd36d325445-secret-volume\") pod \"collect-profiles-29399805-4w7fn\" (UID: \"018a5555-8caf-406b-b858-5cd36d325445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.450565 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2nnx\" (UniqueName: \"kubernetes.io/projected/018a5555-8caf-406b-b858-5cd36d325445-kube-api-access-k2nnx\") pod \"collect-profiles-29399805-4w7fn\" (UID: \"018a5555-8caf-406b-b858-5cd36d325445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.525182 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" Nov 24 12:45:00 crc kubenswrapper[4930]: I1124 12:45:00.997841 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn"] Nov 24 12:45:01 crc kubenswrapper[4930]: I1124 12:45:01.982302 4930 generic.go:334] "Generic (PLEG): container finished" podID="018a5555-8caf-406b-b858-5cd36d325445" containerID="d7b51f85594ae33573e52a68456cde4ff905cc5e457e1659d68f88dcf38cd7a3" exitCode=0 Nov 24 12:45:01 crc kubenswrapper[4930]: I1124 12:45:01.982383 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" event={"ID":"018a5555-8caf-406b-b858-5cd36d325445","Type":"ContainerDied","Data":"d7b51f85594ae33573e52a68456cde4ff905cc5e457e1659d68f88dcf38cd7a3"} Nov 24 12:45:01 crc kubenswrapper[4930]: I1124 12:45:01.982969 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" 
event={"ID":"018a5555-8caf-406b-b858-5cd36d325445","Type":"ContainerStarted","Data":"649eaa353974c5dd8945e2e96edf5d87d1a4b0aa627020c59789e91766ad9375"} Nov 24 12:45:03 crc kubenswrapper[4930]: I1124 12:45:03.369635 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" Nov 24 12:45:03 crc kubenswrapper[4930]: I1124 12:45:03.384898 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/018a5555-8caf-406b-b858-5cd36d325445-secret-volume\") pod \"018a5555-8caf-406b-b858-5cd36d325445\" (UID: \"018a5555-8caf-406b-b858-5cd36d325445\") " Nov 24 12:45:03 crc kubenswrapper[4930]: I1124 12:45:03.384959 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2nnx\" (UniqueName: \"kubernetes.io/projected/018a5555-8caf-406b-b858-5cd36d325445-kube-api-access-k2nnx\") pod \"018a5555-8caf-406b-b858-5cd36d325445\" (UID: \"018a5555-8caf-406b-b858-5cd36d325445\") " Nov 24 12:45:03 crc kubenswrapper[4930]: I1124 12:45:03.385047 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/018a5555-8caf-406b-b858-5cd36d325445-config-volume\") pod \"018a5555-8caf-406b-b858-5cd36d325445\" (UID: \"018a5555-8caf-406b-b858-5cd36d325445\") " Nov 24 12:45:03 crc kubenswrapper[4930]: I1124 12:45:03.385640 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018a5555-8caf-406b-b858-5cd36d325445-config-volume" (OuterVolumeSpecName: "config-volume") pod "018a5555-8caf-406b-b858-5cd36d325445" (UID: "018a5555-8caf-406b-b858-5cd36d325445"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:03 crc kubenswrapper[4930]: I1124 12:45:03.409238 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018a5555-8caf-406b-b858-5cd36d325445-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "018a5555-8caf-406b-b858-5cd36d325445" (UID: "018a5555-8caf-406b-b858-5cd36d325445"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:03 crc kubenswrapper[4930]: I1124 12:45:03.422882 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018a5555-8caf-406b-b858-5cd36d325445-kube-api-access-k2nnx" (OuterVolumeSpecName: "kube-api-access-k2nnx") pod "018a5555-8caf-406b-b858-5cd36d325445" (UID: "018a5555-8caf-406b-b858-5cd36d325445"). InnerVolumeSpecName "kube-api-access-k2nnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:03 crc kubenswrapper[4930]: I1124 12:45:03.487184 4930 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/018a5555-8caf-406b-b858-5cd36d325445-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:03 crc kubenswrapper[4930]: I1124 12:45:03.487229 4930 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/018a5555-8caf-406b-b858-5cd36d325445-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:03 crc kubenswrapper[4930]: I1124 12:45:03.487239 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2nnx\" (UniqueName: \"kubernetes.io/projected/018a5555-8caf-406b-b858-5cd36d325445-kube-api-access-k2nnx\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:04 crc kubenswrapper[4930]: I1124 12:45:04.002335 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" 
event={"ID":"018a5555-8caf-406b-b858-5cd36d325445","Type":"ContainerDied","Data":"649eaa353974c5dd8945e2e96edf5d87d1a4b0aa627020c59789e91766ad9375"} Nov 24 12:45:04 crc kubenswrapper[4930]: I1124 12:45:04.002658 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="649eaa353974c5dd8945e2e96edf5d87d1a4b0aa627020c59789e91766ad9375" Nov 24 12:45:04 crc kubenswrapper[4930]: I1124 12:45:04.002569 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-4w7fn" Nov 24 12:45:04 crc kubenswrapper[4930]: I1124 12:45:04.467035 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44"] Nov 24 12:45:04 crc kubenswrapper[4930]: I1124 12:45:04.475174 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-rkv44"] Nov 24 12:45:06 crc kubenswrapper[4930]: I1124 12:45:06.102805 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea65b02d-9e8a-4089-b867-d1c7cfb70df5" path="/var/lib/kubelet/pods/ea65b02d-9e8a-4089-b867-d1c7cfb70df5/volumes" Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.007321 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dck4g"] Nov 24 12:45:14 crc kubenswrapper[4930]: E1124 12:45:14.008677 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018a5555-8caf-406b-b858-5cd36d325445" containerName="collect-profiles" Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.008697 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="018a5555-8caf-406b-b858-5cd36d325445" containerName="collect-profiles" Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.009060 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="018a5555-8caf-406b-b858-5cd36d325445" containerName="collect-profiles" 
Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.011178 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.029964 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dck4g"]
Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.182868 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15734d21-7620-42df-bc4a-b9fd5db7162a-catalog-content\") pod \"community-operators-dck4g\" (UID: \"15734d21-7620-42df-bc4a-b9fd5db7162a\") " pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.183481 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15734d21-7620-42df-bc4a-b9fd5db7162a-utilities\") pod \"community-operators-dck4g\" (UID: \"15734d21-7620-42df-bc4a-b9fd5db7162a\") " pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.183575 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djph9\" (UniqueName: \"kubernetes.io/projected/15734d21-7620-42df-bc4a-b9fd5db7162a-kube-api-access-djph9\") pod \"community-operators-dck4g\" (UID: \"15734d21-7620-42df-bc4a-b9fd5db7162a\") " pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.285815 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15734d21-7620-42df-bc4a-b9fd5db7162a-utilities\") pod \"community-operators-dck4g\" (UID: \"15734d21-7620-42df-bc4a-b9fd5db7162a\") " pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.286154 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djph9\" (UniqueName: \"kubernetes.io/projected/15734d21-7620-42df-bc4a-b9fd5db7162a-kube-api-access-djph9\") pod \"community-operators-dck4g\" (UID: \"15734d21-7620-42df-bc4a-b9fd5db7162a\") " pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.286278 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15734d21-7620-42df-bc4a-b9fd5db7162a-catalog-content\") pod \"community-operators-dck4g\" (UID: \"15734d21-7620-42df-bc4a-b9fd5db7162a\") " pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.286862 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15734d21-7620-42df-bc4a-b9fd5db7162a-utilities\") pod \"community-operators-dck4g\" (UID: \"15734d21-7620-42df-bc4a-b9fd5db7162a\") " pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.287239 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15734d21-7620-42df-bc4a-b9fd5db7162a-catalog-content\") pod \"community-operators-dck4g\" (UID: \"15734d21-7620-42df-bc4a-b9fd5db7162a\") " pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.316787 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djph9\" (UniqueName: \"kubernetes.io/projected/15734d21-7620-42df-bc4a-b9fd5db7162a-kube-api-access-djph9\") pod \"community-operators-dck4g\" (UID: \"15734d21-7620-42df-bc4a-b9fd5db7162a\") " pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.340832 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:14 crc kubenswrapper[4930]: I1124 12:45:14.908890 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dck4g"]
Nov 24 12:45:14 crc kubenswrapper[4930]: W1124 12:45:14.909236 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15734d21_7620_42df_bc4a_b9fd5db7162a.slice/crio-dd251158ebc01804e97fa7edcd9119105d1268a50ff71d1060d8c85303086d87 WatchSource:0}: Error finding container dd251158ebc01804e97fa7edcd9119105d1268a50ff71d1060d8c85303086d87: Status 404 returned error can't find the container with id dd251158ebc01804e97fa7edcd9119105d1268a50ff71d1060d8c85303086d87
Nov 24 12:45:15 crc kubenswrapper[4930]: I1124 12:45:15.097237 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dck4g" event={"ID":"15734d21-7620-42df-bc4a-b9fd5db7162a","Type":"ContainerStarted","Data":"dd251158ebc01804e97fa7edcd9119105d1268a50ff71d1060d8c85303086d87"}
Nov 24 12:45:15 crc kubenswrapper[4930]: I1124 12:45:15.802160 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k426s"]
Nov 24 12:45:15 crc kubenswrapper[4930]: I1124 12:45:15.805181 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:15 crc kubenswrapper[4930]: I1124 12:45:15.819235 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k426s"]
Nov 24 12:45:15 crc kubenswrapper[4930]: I1124 12:45:15.920612 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-utilities\") pod \"redhat-marketplace-k426s\" (UID: \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\") " pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:15 crc kubenswrapper[4930]: I1124 12:45:15.920769 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-catalog-content\") pod \"redhat-marketplace-k426s\" (UID: \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\") " pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:15 crc kubenswrapper[4930]: I1124 12:45:15.920958 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nqp7\" (UniqueName: \"kubernetes.io/projected/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-kube-api-access-9nqp7\") pod \"redhat-marketplace-k426s\" (UID: \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\") " pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:16 crc kubenswrapper[4930]: I1124 12:45:16.023856 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-catalog-content\") pod \"redhat-marketplace-k426s\" (UID: \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\") " pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:16 crc kubenswrapper[4930]: I1124 12:45:16.023981 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nqp7\" (UniqueName: \"kubernetes.io/projected/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-kube-api-access-9nqp7\") pod \"redhat-marketplace-k426s\" (UID: \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\") " pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:16 crc kubenswrapper[4930]: I1124 12:45:16.024167 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-utilities\") pod \"redhat-marketplace-k426s\" (UID: \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\") " pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:16 crc kubenswrapper[4930]: I1124 12:45:16.024578 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-catalog-content\") pod \"redhat-marketplace-k426s\" (UID: \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\") " pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:16 crc kubenswrapper[4930]: I1124 12:45:16.024795 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-utilities\") pod \"redhat-marketplace-k426s\" (UID: \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\") " pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:16 crc kubenswrapper[4930]: I1124 12:45:16.049512 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nqp7\" (UniqueName: \"kubernetes.io/projected/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-kube-api-access-9nqp7\") pod \"redhat-marketplace-k426s\" (UID: \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\") " pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:16 crc kubenswrapper[4930]: I1124 12:45:16.107584 4930 generic.go:334] "Generic (PLEG): container finished" podID="15734d21-7620-42df-bc4a-b9fd5db7162a" containerID="3875d1db9ebc577aa775d7bb4bf2dda4894a9b786a023e01b43c38c505c16767" exitCode=0
Nov 24 12:45:16 crc kubenswrapper[4930]: I1124 12:45:16.107635 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dck4g" event={"ID":"15734d21-7620-42df-bc4a-b9fd5db7162a","Type":"ContainerDied","Data":"3875d1db9ebc577aa775d7bb4bf2dda4894a9b786a023e01b43c38c505c16767"}
Nov 24 12:45:16 crc kubenswrapper[4930]: I1124 12:45:16.146483 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:16 crc kubenswrapper[4930]: I1124 12:45:16.673526 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k426s"]
Nov 24 12:45:17 crc kubenswrapper[4930]: I1124 12:45:17.120180 4930 generic.go:334] "Generic (PLEG): container finished" podID="0bb14ea0-3fc8-4a49-b322-c2d52c29103d" containerID="4ed6addf60330840dbdf73e7c34f2f007f69ef9788016520264441b46f3a1c14" exitCode=0
Nov 24 12:45:17 crc kubenswrapper[4930]: I1124 12:45:17.120290 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k426s" event={"ID":"0bb14ea0-3fc8-4a49-b322-c2d52c29103d","Type":"ContainerDied","Data":"4ed6addf60330840dbdf73e7c34f2f007f69ef9788016520264441b46f3a1c14"}
Nov 24 12:45:17 crc kubenswrapper[4930]: I1124 12:45:17.120847 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k426s" event={"ID":"0bb14ea0-3fc8-4a49-b322-c2d52c29103d","Type":"ContainerStarted","Data":"2ac7368911d2025fb69db5c2b52a6a58683d86c8a5f825697e78fab55e60f3b9"}
Nov 24 12:45:19 crc kubenswrapper[4930]: I1124 12:45:19.140600 4930 generic.go:334] "Generic (PLEG): container finished" podID="0bb14ea0-3fc8-4a49-b322-c2d52c29103d" containerID="953d2fab990d5b67766de31837426929d2d7108b7fc60a91b7b6e191ad9d801c" exitCode=0
Nov 24 12:45:19 crc kubenswrapper[4930]: I1124 12:45:19.140701 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k426s" event={"ID":"0bb14ea0-3fc8-4a49-b322-c2d52c29103d","Type":"ContainerDied","Data":"953d2fab990d5b67766de31837426929d2d7108b7fc60a91b7b6e191ad9d801c"}
Nov 24 12:45:20 crc kubenswrapper[4930]: I1124 12:45:20.076008 4930 scope.go:117] "RemoveContainer" containerID="1e3b697adb5b8968f2e3cb95e2c09bbd89ced4a8621a4c7af9b675ed29884dfc"
Nov 24 12:45:20 crc kubenswrapper[4930]: I1124 12:45:20.155242 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k426s" event={"ID":"0bb14ea0-3fc8-4a49-b322-c2d52c29103d","Type":"ContainerStarted","Data":"4c39ae2de853a15472f34802dc5ed83501b893a0ec1a6c4d486e584720024d63"}
Nov 24 12:45:20 crc kubenswrapper[4930]: I1124 12:45:20.159163 4930 generic.go:334] "Generic (PLEG): container finished" podID="15734d21-7620-42df-bc4a-b9fd5db7162a" containerID="8ed43fe9903cd0c93ec503ea895db15fed0fff7696970a2c64f371d3006a6df4" exitCode=0
Nov 24 12:45:20 crc kubenswrapper[4930]: I1124 12:45:20.159212 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dck4g" event={"ID":"15734d21-7620-42df-bc4a-b9fd5db7162a","Type":"ContainerDied","Data":"8ed43fe9903cd0c93ec503ea895db15fed0fff7696970a2c64f371d3006a6df4"}
Nov 24 12:45:20 crc kubenswrapper[4930]: I1124 12:45:20.175816 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k426s" podStartSLOduration=2.485935029 podStartE2EDuration="5.175799051s" podCreationTimestamp="2025-11-24 12:45:15 +0000 UTC" firstStartedPulling="2025-11-24 12:45:17.125813109 +0000 UTC m=+2763.740141059" lastFinishedPulling="2025-11-24 12:45:19.815677141 +0000 UTC m=+2766.430005081" observedRunningTime="2025-11-24 12:45:20.169301504 +0000 UTC m=+2766.783629444" watchObservedRunningTime="2025-11-24 12:45:20.175799051 +0000 UTC m=+2766.790127001"
Nov 24 12:45:21 crc kubenswrapper[4930]: I1124 12:45:21.173857 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dck4g" event={"ID":"15734d21-7620-42df-bc4a-b9fd5db7162a","Type":"ContainerStarted","Data":"77d8c3524e14f051c28d9d45bc264e831b50d913d9285a716ecd6de5eddbe3bf"}
Nov 24 12:45:21 crc kubenswrapper[4930]: I1124 12:45:21.201958 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dck4g" podStartSLOduration=3.775389819 podStartE2EDuration="8.201939042s" podCreationTimestamp="2025-11-24 12:45:13 +0000 UTC" firstStartedPulling="2025-11-24 12:45:16.110761178 +0000 UTC m=+2762.725089128" lastFinishedPulling="2025-11-24 12:45:20.537310381 +0000 UTC m=+2767.151638351" observedRunningTime="2025-11-24 12:45:21.197791422 +0000 UTC m=+2767.812119422" watchObservedRunningTime="2025-11-24 12:45:21.201939042 +0000 UTC m=+2767.816266982"
Nov 24 12:45:24 crc kubenswrapper[4930]: I1124 12:45:24.341806 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:24 crc kubenswrapper[4930]: I1124 12:45:24.342650 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:24 crc kubenswrapper[4930]: I1124 12:45:24.398595 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:25 crc kubenswrapper[4930]: I1124 12:45:25.276012 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dck4g"
Nov 24 12:45:25 crc kubenswrapper[4930]: I1124 12:45:25.420879 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dck4g"]
Nov 24 12:45:25 crc kubenswrapper[4930]: I1124 12:45:25.596989 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hj8d"]
Nov 24 12:45:25 crc kubenswrapper[4930]: I1124 12:45:25.597267 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7hj8d" podUID="a6f3efd2-4683-4fab-9749-803e98a00cd2" containerName="registry-server" containerID="cri-o://f75a145d398c84d26e059edc0a4978702a5423eb00bdcee195552b3e0399d7e0" gracePeriod=2
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.147163 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.148621 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.163744 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hj8d"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.243648 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.272295 4930 generic.go:334] "Generic (PLEG): container finished" podID="a6f3efd2-4683-4fab-9749-803e98a00cd2" containerID="f75a145d398c84d26e059edc0a4978702a5423eb00bdcee195552b3e0399d7e0" exitCode=0
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.274921 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hj8d" event={"ID":"a6f3efd2-4683-4fab-9749-803e98a00cd2","Type":"ContainerDied","Data":"f75a145d398c84d26e059edc0a4978702a5423eb00bdcee195552b3e0399d7e0"}
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.274979 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hj8d" event={"ID":"a6f3efd2-4683-4fab-9749-803e98a00cd2","Type":"ContainerDied","Data":"46d5101e7f9fe4c2a2cf06c96c1915c71021eff9f5ea5cb73036249d3a6b469b"}
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.274996 4930 scope.go:117] "RemoveContainer" containerID="f75a145d398c84d26e059edc0a4978702a5423eb00bdcee195552b3e0399d7e0"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.275186 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hj8d"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.314238 4930 scope.go:117] "RemoveContainer" containerID="43f113bc046cd5244ccd8a68bda652ff0faed6e288b0c61993d1b477a80de4f8"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.337290 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2cst\" (UniqueName: \"kubernetes.io/projected/a6f3efd2-4683-4fab-9749-803e98a00cd2-kube-api-access-v2cst\") pod \"a6f3efd2-4683-4fab-9749-803e98a00cd2\" (UID: \"a6f3efd2-4683-4fab-9749-803e98a00cd2\") "
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.337734 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f3efd2-4683-4fab-9749-803e98a00cd2-utilities\") pod \"a6f3efd2-4683-4fab-9749-803e98a00cd2\" (UID: \"a6f3efd2-4683-4fab-9749-803e98a00cd2\") "
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.338089 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f3efd2-4683-4fab-9749-803e98a00cd2-catalog-content\") pod \"a6f3efd2-4683-4fab-9749-803e98a00cd2\" (UID: \"a6f3efd2-4683-4fab-9749-803e98a00cd2\") "
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.340911 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6f3efd2-4683-4fab-9749-803e98a00cd2-utilities" (OuterVolumeSpecName: "utilities") pod "a6f3efd2-4683-4fab-9749-803e98a00cd2" (UID: "a6f3efd2-4683-4fab-9749-803e98a00cd2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.352110 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f3efd2-4683-4fab-9749-803e98a00cd2-kube-api-access-v2cst" (OuterVolumeSpecName: "kube-api-access-v2cst") pod "a6f3efd2-4683-4fab-9749-803e98a00cd2" (UID: "a6f3efd2-4683-4fab-9749-803e98a00cd2"). InnerVolumeSpecName "kube-api-access-v2cst". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.364141 4930 scope.go:117] "RemoveContainer" containerID="d60f9ba1bff8251191d56e33e538a3f2522f47b3418f5dc349ce80e3be0670f7"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.373123 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.434066 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6f3efd2-4683-4fab-9749-803e98a00cd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6f3efd2-4683-4fab-9749-803e98a00cd2" (UID: "a6f3efd2-4683-4fab-9749-803e98a00cd2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.439928 4930 scope.go:117] "RemoveContainer" containerID="f75a145d398c84d26e059edc0a4978702a5423eb00bdcee195552b3e0399d7e0"
Nov 24 12:45:26 crc kubenswrapper[4930]: E1124 12:45:26.440832 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75a145d398c84d26e059edc0a4978702a5423eb00bdcee195552b3e0399d7e0\": container with ID starting with f75a145d398c84d26e059edc0a4978702a5423eb00bdcee195552b3e0399d7e0 not found: ID does not exist" containerID="f75a145d398c84d26e059edc0a4978702a5423eb00bdcee195552b3e0399d7e0"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.440876 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f75a145d398c84d26e059edc0a4978702a5423eb00bdcee195552b3e0399d7e0"} err="failed to get container status \"f75a145d398c84d26e059edc0a4978702a5423eb00bdcee195552b3e0399d7e0\": rpc error: code = NotFound desc = could not find container \"f75a145d398c84d26e059edc0a4978702a5423eb00bdcee195552b3e0399d7e0\": container with ID starting with f75a145d398c84d26e059edc0a4978702a5423eb00bdcee195552b3e0399d7e0 not found: ID does not exist"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.440914 4930 scope.go:117] "RemoveContainer" containerID="43f113bc046cd5244ccd8a68bda652ff0faed6e288b0c61993d1b477a80de4f8"
Nov 24 12:45:26 crc kubenswrapper[4930]: E1124 12:45:26.441463 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43f113bc046cd5244ccd8a68bda652ff0faed6e288b0c61993d1b477a80de4f8\": container with ID starting with 43f113bc046cd5244ccd8a68bda652ff0faed6e288b0c61993d1b477a80de4f8 not found: ID does not exist" containerID="43f113bc046cd5244ccd8a68bda652ff0faed6e288b0c61993d1b477a80de4f8"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.441491 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f113bc046cd5244ccd8a68bda652ff0faed6e288b0c61993d1b477a80de4f8"} err="failed to get container status \"43f113bc046cd5244ccd8a68bda652ff0faed6e288b0c61993d1b477a80de4f8\": rpc error: code = NotFound desc = could not find container \"43f113bc046cd5244ccd8a68bda652ff0faed6e288b0c61993d1b477a80de4f8\": container with ID starting with 43f113bc046cd5244ccd8a68bda652ff0faed6e288b0c61993d1b477a80de4f8 not found: ID does not exist"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.441506 4930 scope.go:117] "RemoveContainer" containerID="d60f9ba1bff8251191d56e33e538a3f2522f47b3418f5dc349ce80e3be0670f7"
Nov 24 12:45:26 crc kubenswrapper[4930]: E1124 12:45:26.442000 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60f9ba1bff8251191d56e33e538a3f2522f47b3418f5dc349ce80e3be0670f7\": container with ID starting with d60f9ba1bff8251191d56e33e538a3f2522f47b3418f5dc349ce80e3be0670f7 not found: ID does not exist" containerID="d60f9ba1bff8251191d56e33e538a3f2522f47b3418f5dc349ce80e3be0670f7"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.442032 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60f9ba1bff8251191d56e33e538a3f2522f47b3418f5dc349ce80e3be0670f7"} err="failed to get container status \"d60f9ba1bff8251191d56e33e538a3f2522f47b3418f5dc349ce80e3be0670f7\": rpc error: code = NotFound desc = could not find container \"d60f9ba1bff8251191d56e33e538a3f2522f47b3418f5dc349ce80e3be0670f7\": container with ID starting with d60f9ba1bff8251191d56e33e538a3f2522f47b3418f5dc349ce80e3be0670f7 not found: ID does not exist"
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.443149 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f3efd2-4683-4fab-9749-803e98a00cd2-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.443197 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2cst\" (UniqueName: \"kubernetes.io/projected/a6f3efd2-4683-4fab-9749-803e98a00cd2-kube-api-access-v2cst\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.443216 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f3efd2-4683-4fab-9749-803e98a00cd2-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.629275 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hj8d"]
Nov 24 12:45:26 crc kubenswrapper[4930]: I1124 12:45:26.638663 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7hj8d"]
Nov 24 12:45:28 crc kubenswrapper[4930]: I1124 12:45:28.112015 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f3efd2-4683-4fab-9749-803e98a00cd2" path="/var/lib/kubelet/pods/a6f3efd2-4683-4fab-9749-803e98a00cd2/volumes"
Nov 24 12:45:28 crc kubenswrapper[4930]: I1124 12:45:28.594184 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k426s"]
Nov 24 12:45:29 crc kubenswrapper[4930]: I1124 12:45:29.311187 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k426s" podUID="0bb14ea0-3fc8-4a49-b322-c2d52c29103d" containerName="registry-server" containerID="cri-o://4c39ae2de853a15472f34802dc5ed83501b893a0ec1a6c4d486e584720024d63" gracePeriod=2
Nov 24 12:45:29 crc kubenswrapper[4930]: I1124 12:45:29.833128 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:29 crc kubenswrapper[4930]: I1124 12:45:29.935360 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nqp7\" (UniqueName: \"kubernetes.io/projected/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-kube-api-access-9nqp7\") pod \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\" (UID: \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\") "
Nov 24 12:45:29 crc kubenswrapper[4930]: I1124 12:45:29.935424 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-catalog-content\") pod \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\" (UID: \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\") "
Nov 24 12:45:29 crc kubenswrapper[4930]: I1124 12:45:29.935488 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-utilities\") pod \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\" (UID: \"0bb14ea0-3fc8-4a49-b322-c2d52c29103d\") "
Nov 24 12:45:29 crc kubenswrapper[4930]: I1124 12:45:29.936259 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-utilities" (OuterVolumeSpecName: "utilities") pod "0bb14ea0-3fc8-4a49-b322-c2d52c29103d" (UID: "0bb14ea0-3fc8-4a49-b322-c2d52c29103d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:45:29 crc kubenswrapper[4930]: I1124 12:45:29.943347 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-kube-api-access-9nqp7" (OuterVolumeSpecName: "kube-api-access-9nqp7") pod "0bb14ea0-3fc8-4a49-b322-c2d52c29103d" (UID: "0bb14ea0-3fc8-4a49-b322-c2d52c29103d"). InnerVolumeSpecName "kube-api-access-9nqp7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:45:29 crc kubenswrapper[4930]: I1124 12:45:29.953962 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bb14ea0-3fc8-4a49-b322-c2d52c29103d" (UID: "0bb14ea0-3fc8-4a49-b322-c2d52c29103d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.038244 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nqp7\" (UniqueName: \"kubernetes.io/projected/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-kube-api-access-9nqp7\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.038286 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.038300 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb14ea0-3fc8-4a49-b322-c2d52c29103d-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.321647 4930 generic.go:334] "Generic (PLEG): container finished" podID="0bb14ea0-3fc8-4a49-b322-c2d52c29103d" containerID="4c39ae2de853a15472f34802dc5ed83501b893a0ec1a6c4d486e584720024d63" exitCode=0
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.321690 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k426s" event={"ID":"0bb14ea0-3fc8-4a49-b322-c2d52c29103d","Type":"ContainerDied","Data":"4c39ae2de853a15472f34802dc5ed83501b893a0ec1a6c4d486e584720024d63"}
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.321718 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k426s" event={"ID":"0bb14ea0-3fc8-4a49-b322-c2d52c29103d","Type":"ContainerDied","Data":"2ac7368911d2025fb69db5c2b52a6a58683d86c8a5f825697e78fab55e60f3b9"}
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.321720 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k426s"
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.321736 4930 scope.go:117] "RemoveContainer" containerID="4c39ae2de853a15472f34802dc5ed83501b893a0ec1a6c4d486e584720024d63"
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.355097 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k426s"]
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.362819 4930 scope.go:117] "RemoveContainer" containerID="953d2fab990d5b67766de31837426929d2d7108b7fc60a91b7b6e191ad9d801c"
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.371729 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k426s"]
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.385503 4930 scope.go:117] "RemoveContainer" containerID="4ed6addf60330840dbdf73e7c34f2f007f69ef9788016520264441b46f3a1c14"
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.427823 4930 scope.go:117] "RemoveContainer" containerID="4c39ae2de853a15472f34802dc5ed83501b893a0ec1a6c4d486e584720024d63"
Nov 24 12:45:30 crc kubenswrapper[4930]: E1124 12:45:30.428329 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c39ae2de853a15472f34802dc5ed83501b893a0ec1a6c4d486e584720024d63\": container with ID starting with 4c39ae2de853a15472f34802dc5ed83501b893a0ec1a6c4d486e584720024d63 not found: ID does not exist" containerID="4c39ae2de853a15472f34802dc5ed83501b893a0ec1a6c4d486e584720024d63"
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.428376 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c39ae2de853a15472f34802dc5ed83501b893a0ec1a6c4d486e584720024d63"} err="failed to get container status \"4c39ae2de853a15472f34802dc5ed83501b893a0ec1a6c4d486e584720024d63\": rpc error: code = NotFound desc = could not find container \"4c39ae2de853a15472f34802dc5ed83501b893a0ec1a6c4d486e584720024d63\": container with ID starting with 4c39ae2de853a15472f34802dc5ed83501b893a0ec1a6c4d486e584720024d63 not found: ID does not exist"
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.428411 4930 scope.go:117] "RemoveContainer" containerID="953d2fab990d5b67766de31837426929d2d7108b7fc60a91b7b6e191ad9d801c"
Nov 24 12:45:30 crc kubenswrapper[4930]: E1124 12:45:30.428976 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"953d2fab990d5b67766de31837426929d2d7108b7fc60a91b7b6e191ad9d801c\": container with ID starting with 953d2fab990d5b67766de31837426929d2d7108b7fc60a91b7b6e191ad9d801c not found: ID does not exist" containerID="953d2fab990d5b67766de31837426929d2d7108b7fc60a91b7b6e191ad9d801c"
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.429026 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"953d2fab990d5b67766de31837426929d2d7108b7fc60a91b7b6e191ad9d801c"} err="failed to get container status \"953d2fab990d5b67766de31837426929d2d7108b7fc60a91b7b6e191ad9d801c\": rpc error: code = NotFound desc = could not find container \"953d2fab990d5b67766de31837426929d2d7108b7fc60a91b7b6e191ad9d801c\": container with ID starting with 953d2fab990d5b67766de31837426929d2d7108b7fc60a91b7b6e191ad9d801c not found: ID does not exist"
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.429062 4930 scope.go:117] "RemoveContainer" containerID="4ed6addf60330840dbdf73e7c34f2f007f69ef9788016520264441b46f3a1c14"
Nov 24 12:45:30 crc kubenswrapper[4930]: E1124 12:45:30.429402 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed6addf60330840dbdf73e7c34f2f007f69ef9788016520264441b46f3a1c14\": container with ID starting with 4ed6addf60330840dbdf73e7c34f2f007f69ef9788016520264441b46f3a1c14 not found: ID does not exist" containerID="4ed6addf60330840dbdf73e7c34f2f007f69ef9788016520264441b46f3a1c14"
Nov 24 12:45:30 crc kubenswrapper[4930]: I1124 12:45:30.429437 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed6addf60330840dbdf73e7c34f2f007f69ef9788016520264441b46f3a1c14"} err="failed to get container status \"4ed6addf60330840dbdf73e7c34f2f007f69ef9788016520264441b46f3a1c14\": rpc error: code = NotFound desc = could not find container \"4ed6addf60330840dbdf73e7c34f2f007f69ef9788016520264441b46f3a1c14\": container with ID starting with 4ed6addf60330840dbdf73e7c34f2f007f69ef9788016520264441b46f3a1c14 not found: ID does not exist"
Nov 24 12:45:32 crc kubenswrapper[4930]: I1124 12:45:32.094467 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb14ea0-3fc8-4a49-b322-c2d52c29103d" path="/var/lib/kubelet/pods/0bb14ea0-3fc8-4a49-b322-c2d52c29103d/volumes"
Nov 24 12:47:01 crc kubenswrapper[4930]: I1124 12:47:01.809406 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 12:47:01 crc kubenswrapper[4930]: I1124 12:47:01.810126 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection
refused" Nov 24 12:47:31 crc kubenswrapper[4930]: I1124 12:47:31.809565 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:47:31 crc kubenswrapper[4930]: I1124 12:47:31.810808 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:48:01 crc kubenswrapper[4930]: I1124 12:48:01.809295 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:48:01 crc kubenswrapper[4930]: I1124 12:48:01.809921 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:48:01 crc kubenswrapper[4930]: I1124 12:48:01.809985 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 12:48:01 crc kubenswrapper[4930]: I1124 12:48:01.810966 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d8009f0d50af8bcd32af8108bb1f6f40bc204c198cc350825e7feae500f7e1e"} 
pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:48:01 crc kubenswrapper[4930]: I1124 12:48:01.811145 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://3d8009f0d50af8bcd32af8108bb1f6f40bc204c198cc350825e7feae500f7e1e" gracePeriod=600 Nov 24 12:48:02 crc kubenswrapper[4930]: I1124 12:48:02.777482 4930 generic.go:334] "Generic (PLEG): container finished" podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="3d8009f0d50af8bcd32af8108bb1f6f40bc204c198cc350825e7feae500f7e1e" exitCode=0 Nov 24 12:48:02 crc kubenswrapper[4930]: I1124 12:48:02.777630 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"3d8009f0d50af8bcd32af8108bb1f6f40bc204c198cc350825e7feae500f7e1e"} Nov 24 12:48:02 crc kubenswrapper[4930]: I1124 12:48:02.778174 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505"} Nov 24 12:48:02 crc kubenswrapper[4930]: I1124 12:48:02.778211 4930 scope.go:117] "RemoveContainer" containerID="da5a1303f8ddf60bc4746b18a863825b2fa5d356dfa128f571ad03c6c1aa5d5b" Nov 24 12:48:20 crc kubenswrapper[4930]: I1124 12:48:20.196211 4930 scope.go:117] "RemoveContainer" containerID="1dcb47ce131c7fe4f7e30976413565510b8f660f63deac42098c3072438040ed" Nov 24 12:48:20 crc kubenswrapper[4930]: I1124 12:48:20.245803 4930 scope.go:117] "RemoveContainer" 
containerID="3cf63bde00cb7929671ba21cbdb6b09db17ce5be84fd59f8715988bbc5b1ed29" Nov 24 12:48:20 crc kubenswrapper[4930]: I1124 12:48:20.276972 4930 scope.go:117] "RemoveContainer" containerID="d946f4e2a32d1f2e76b5220961d3f1959dfb6a7ea293d0d113fe7809dbae345a" Nov 24 12:50:31 crc kubenswrapper[4930]: I1124 12:50:31.809129 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:50:31 crc kubenswrapper[4930]: I1124 12:50:31.809791 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:51:01 crc kubenswrapper[4930]: I1124 12:51:01.809883 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:51:01 crc kubenswrapper[4930]: I1124 12:51:01.810661 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:51:31 crc kubenswrapper[4930]: I1124 12:51:31.809515 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:51:31 crc kubenswrapper[4930]: I1124 12:51:31.810186 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:51:31 crc kubenswrapper[4930]: I1124 12:51:31.810245 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 12:51:31 crc kubenswrapper[4930]: I1124 12:51:31.811864 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505"} pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:51:31 crc kubenswrapper[4930]: I1124 12:51:31.811975 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" gracePeriod=600 Nov 24 12:51:31 crc kubenswrapper[4930]: E1124 12:51:31.935088 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" 
podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:51:31 crc kubenswrapper[4930]: I1124 12:51:31.944791 4930 generic.go:334] "Generic (PLEG): container finished" podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" exitCode=0 Nov 24 12:51:31 crc kubenswrapper[4930]: I1124 12:51:31.944848 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505"} Nov 24 12:51:31 crc kubenswrapper[4930]: I1124 12:51:31.944900 4930 scope.go:117] "RemoveContainer" containerID="3d8009f0d50af8bcd32af8108bb1f6f40bc204c198cc350825e7feae500f7e1e" Nov 24 12:51:31 crc kubenswrapper[4930]: I1124 12:51:31.946008 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:51:31 crc kubenswrapper[4930]: E1124 12:51:31.946466 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:51:46 crc kubenswrapper[4930]: I1124 12:51:46.085463 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:51:46 crc kubenswrapper[4930]: E1124 12:51:46.086961 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:52:01 crc kubenswrapper[4930]: I1124 12:52:01.084533 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:52:01 crc kubenswrapper[4930]: E1124 12:52:01.085257 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:52:14 crc kubenswrapper[4930]: I1124 12:52:14.091320 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:52:14 crc kubenswrapper[4930]: E1124 12:52:14.092279 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:52:27 crc kubenswrapper[4930]: I1124 12:52:27.085717 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:52:27 crc kubenswrapper[4930]: E1124 12:52:27.087005 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.682602 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kvm4p"] Nov 24 12:52:35 crc kubenswrapper[4930]: E1124 12:52:35.684937 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f3efd2-4683-4fab-9749-803e98a00cd2" containerName="extract-utilities" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.684959 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f3efd2-4683-4fab-9749-803e98a00cd2" containerName="extract-utilities" Nov 24 12:52:35 crc kubenswrapper[4930]: E1124 12:52:35.684971 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb14ea0-3fc8-4a49-b322-c2d52c29103d" containerName="registry-server" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.684979 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb14ea0-3fc8-4a49-b322-c2d52c29103d" containerName="registry-server" Nov 24 12:52:35 crc kubenswrapper[4930]: E1124 12:52:35.684991 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb14ea0-3fc8-4a49-b322-c2d52c29103d" containerName="extract-utilities" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.684999 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb14ea0-3fc8-4a49-b322-c2d52c29103d" containerName="extract-utilities" Nov 24 12:52:35 crc kubenswrapper[4930]: E1124 12:52:35.685022 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f3efd2-4683-4fab-9749-803e98a00cd2" containerName="registry-server" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.685032 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f3efd2-4683-4fab-9749-803e98a00cd2" 
containerName="registry-server" Nov 24 12:52:35 crc kubenswrapper[4930]: E1124 12:52:35.685055 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f3efd2-4683-4fab-9749-803e98a00cd2" containerName="extract-content" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.685063 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f3efd2-4683-4fab-9749-803e98a00cd2" containerName="extract-content" Nov 24 12:52:35 crc kubenswrapper[4930]: E1124 12:52:35.685080 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb14ea0-3fc8-4a49-b322-c2d52c29103d" containerName="extract-content" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.685087 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb14ea0-3fc8-4a49-b322-c2d52c29103d" containerName="extract-content" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.685339 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb14ea0-3fc8-4a49-b322-c2d52c29103d" containerName="registry-server" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.685365 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f3efd2-4683-4fab-9749-803e98a00cd2" containerName="registry-server" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.687367 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.702394 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kvm4p"] Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.767400 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-catalog-content\") pod \"redhat-operators-kvm4p\" (UID: \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\") " pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.767517 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp7jp\" (UniqueName: \"kubernetes.io/projected/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-kube-api-access-mp7jp\") pod \"redhat-operators-kvm4p\" (UID: \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\") " pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.767895 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-utilities\") pod \"redhat-operators-kvm4p\" (UID: \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\") " pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.869530 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-utilities\") pod \"redhat-operators-kvm4p\" (UID: \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\") " pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.869592 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-catalog-content\") pod \"redhat-operators-kvm4p\" (UID: \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\") " pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.869646 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp7jp\" (UniqueName: \"kubernetes.io/projected/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-kube-api-access-mp7jp\") pod \"redhat-operators-kvm4p\" (UID: \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\") " pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.870116 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-utilities\") pod \"redhat-operators-kvm4p\" (UID: \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\") " pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.870203 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-catalog-content\") pod \"redhat-operators-kvm4p\" (UID: \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\") " pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:35 crc kubenswrapper[4930]: I1124 12:52:35.890432 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp7jp\" (UniqueName: \"kubernetes.io/projected/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-kube-api-access-mp7jp\") pod \"redhat-operators-kvm4p\" (UID: \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\") " pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:36 crc kubenswrapper[4930]: I1124 12:52:36.026835 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:36 crc kubenswrapper[4930]: I1124 12:52:36.481397 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kvm4p"] Nov 24 12:52:36 crc kubenswrapper[4930]: I1124 12:52:36.550389 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvm4p" event={"ID":"b0cb00d3-7146-4cb9-918b-a7abecb2ccda","Type":"ContainerStarted","Data":"3116ef35dbb18a38733dd98a7066cf5044f1dc1a226b771c9d81f719aeb98c6b"} Nov 24 12:52:37 crc kubenswrapper[4930]: I1124 12:52:37.562180 4930 generic.go:334] "Generic (PLEG): container finished" podID="b0cb00d3-7146-4cb9-918b-a7abecb2ccda" containerID="bd2193761ffa6589db80292d6814600cba6794636363d34b9d30bb16c9dfcb02" exitCode=0 Nov 24 12:52:37 crc kubenswrapper[4930]: I1124 12:52:37.562331 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvm4p" event={"ID":"b0cb00d3-7146-4cb9-918b-a7abecb2ccda","Type":"ContainerDied","Data":"bd2193761ffa6589db80292d6814600cba6794636363d34b9d30bb16c9dfcb02"} Nov 24 12:52:37 crc kubenswrapper[4930]: I1124 12:52:37.564381 4930 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:52:39 crc kubenswrapper[4930]: I1124 12:52:39.580526 4930 generic.go:334] "Generic (PLEG): container finished" podID="b0cb00d3-7146-4cb9-918b-a7abecb2ccda" containerID="faa451e0a5595186221f5f9baec3c1f208f55669fb31f0a9a8f533488a2a00e6" exitCode=0 Nov 24 12:52:39 crc kubenswrapper[4930]: I1124 12:52:39.580568 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvm4p" event={"ID":"b0cb00d3-7146-4cb9-918b-a7abecb2ccda","Type":"ContainerDied","Data":"faa451e0a5595186221f5f9baec3c1f208f55669fb31f0a9a8f533488a2a00e6"} Nov 24 12:52:40 crc kubenswrapper[4930]: I1124 12:52:40.470699 4930 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-kpxwx"] Nov 24 12:52:40 crc kubenswrapper[4930]: I1124 12:52:40.473975 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:40 crc kubenswrapper[4930]: I1124 12:52:40.484921 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kpxwx"] Nov 24 12:52:40 crc kubenswrapper[4930]: I1124 12:52:40.562461 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102b7539-783a-45a5-9ab5-ed67af4e677f-utilities\") pod \"certified-operators-kpxwx\" (UID: \"102b7539-783a-45a5-9ab5-ed67af4e677f\") " pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:40 crc kubenswrapper[4930]: I1124 12:52:40.562532 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102b7539-783a-45a5-9ab5-ed67af4e677f-catalog-content\") pod \"certified-operators-kpxwx\" (UID: \"102b7539-783a-45a5-9ab5-ed67af4e677f\") " pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:40 crc kubenswrapper[4930]: I1124 12:52:40.562673 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgnkf\" (UniqueName: \"kubernetes.io/projected/102b7539-783a-45a5-9ab5-ed67af4e677f-kube-api-access-xgnkf\") pod \"certified-operators-kpxwx\" (UID: \"102b7539-783a-45a5-9ab5-ed67af4e677f\") " pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:40 crc kubenswrapper[4930]: I1124 12:52:40.666910 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102b7539-783a-45a5-9ab5-ed67af4e677f-utilities\") pod \"certified-operators-kpxwx\" (UID: \"102b7539-783a-45a5-9ab5-ed67af4e677f\") 
" pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:40 crc kubenswrapper[4930]: I1124 12:52:40.667292 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102b7539-783a-45a5-9ab5-ed67af4e677f-catalog-content\") pod \"certified-operators-kpxwx\" (UID: \"102b7539-783a-45a5-9ab5-ed67af4e677f\") " pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:40 crc kubenswrapper[4930]: I1124 12:52:40.667607 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgnkf\" (UniqueName: \"kubernetes.io/projected/102b7539-783a-45a5-9ab5-ed67af4e677f-kube-api-access-xgnkf\") pod \"certified-operators-kpxwx\" (UID: \"102b7539-783a-45a5-9ab5-ed67af4e677f\") " pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:40 crc kubenswrapper[4930]: I1124 12:52:40.668030 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102b7539-783a-45a5-9ab5-ed67af4e677f-utilities\") pod \"certified-operators-kpxwx\" (UID: \"102b7539-783a-45a5-9ab5-ed67af4e677f\") " pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:40 crc kubenswrapper[4930]: I1124 12:52:40.668176 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102b7539-783a-45a5-9ab5-ed67af4e677f-catalog-content\") pod \"certified-operators-kpxwx\" (UID: \"102b7539-783a-45a5-9ab5-ed67af4e677f\") " pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:40 crc kubenswrapper[4930]: I1124 12:52:40.693480 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgnkf\" (UniqueName: \"kubernetes.io/projected/102b7539-783a-45a5-9ab5-ed67af4e677f-kube-api-access-xgnkf\") pod \"certified-operators-kpxwx\" (UID: \"102b7539-783a-45a5-9ab5-ed67af4e677f\") " 
pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:40 crc kubenswrapper[4930]: I1124 12:52:40.809915 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:41 crc kubenswrapper[4930]: I1124 12:52:41.294634 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kpxwx"] Nov 24 12:52:41 crc kubenswrapper[4930]: W1124 12:52:41.301564 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod102b7539_783a_45a5_9ab5_ed67af4e677f.slice/crio-e3041d7b944bc223a661655cf0641d90dc852ab82fcaaf4577b3d747a2068505 WatchSource:0}: Error finding container e3041d7b944bc223a661655cf0641d90dc852ab82fcaaf4577b3d747a2068505: Status 404 returned error can't find the container with id e3041d7b944bc223a661655cf0641d90dc852ab82fcaaf4577b3d747a2068505 Nov 24 12:52:41 crc kubenswrapper[4930]: I1124 12:52:41.605931 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvm4p" event={"ID":"b0cb00d3-7146-4cb9-918b-a7abecb2ccda","Type":"ContainerStarted","Data":"fb584b6f95a8b1ee6f19526c55e276c6139fb57139514869025f4d7be57db9b8"} Nov 24 12:52:41 crc kubenswrapper[4930]: I1124 12:52:41.608063 4930 generic.go:334] "Generic (PLEG): container finished" podID="102b7539-783a-45a5-9ab5-ed67af4e677f" containerID="be124734183a2bbdbc9813c783ed1b77e6ac455c53e84f3d6040adfaf4039770" exitCode=0 Nov 24 12:52:41 crc kubenswrapper[4930]: I1124 12:52:41.608112 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpxwx" event={"ID":"102b7539-783a-45a5-9ab5-ed67af4e677f","Type":"ContainerDied","Data":"be124734183a2bbdbc9813c783ed1b77e6ac455c53e84f3d6040adfaf4039770"} Nov 24 12:52:41 crc kubenswrapper[4930]: I1124 12:52:41.608141 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-kpxwx" event={"ID":"102b7539-783a-45a5-9ab5-ed67af4e677f","Type":"ContainerStarted","Data":"e3041d7b944bc223a661655cf0641d90dc852ab82fcaaf4577b3d747a2068505"} Nov 24 12:52:41 crc kubenswrapper[4930]: I1124 12:52:41.654627 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kvm4p" podStartSLOduration=3.218202302 podStartE2EDuration="6.654604319s" podCreationTimestamp="2025-11-24 12:52:35 +0000 UTC" firstStartedPulling="2025-11-24 12:52:37.56408791 +0000 UTC m=+3204.178415861" lastFinishedPulling="2025-11-24 12:52:41.000489908 +0000 UTC m=+3207.614817878" observedRunningTime="2025-11-24 12:52:41.631237557 +0000 UTC m=+3208.245565507" watchObservedRunningTime="2025-11-24 12:52:41.654604319 +0000 UTC m=+3208.268932269" Nov 24 12:52:42 crc kubenswrapper[4930]: I1124 12:52:42.084715 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:52:42 crc kubenswrapper[4930]: E1124 12:52:42.085046 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:52:42 crc kubenswrapper[4930]: I1124 12:52:42.619382 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpxwx" event={"ID":"102b7539-783a-45a5-9ab5-ed67af4e677f","Type":"ContainerStarted","Data":"12e02c7f06d6ec8f37b8e7539536c3f818b415c7efe065e14c6793913b622ed9"} Nov 24 12:52:43 crc kubenswrapper[4930]: I1124 12:52:43.632712 4930 generic.go:334] "Generic (PLEG): container finished" podID="102b7539-783a-45a5-9ab5-ed67af4e677f" 
containerID="12e02c7f06d6ec8f37b8e7539536c3f818b415c7efe065e14c6793913b622ed9" exitCode=0 Nov 24 12:52:43 crc kubenswrapper[4930]: I1124 12:52:43.632801 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpxwx" event={"ID":"102b7539-783a-45a5-9ab5-ed67af4e677f","Type":"ContainerDied","Data":"12e02c7f06d6ec8f37b8e7539536c3f818b415c7efe065e14c6793913b622ed9"} Nov 24 12:52:44 crc kubenswrapper[4930]: I1124 12:52:44.648954 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpxwx" event={"ID":"102b7539-783a-45a5-9ab5-ed67af4e677f","Type":"ContainerStarted","Data":"fcaf6d8acebb289ea58d2cc802fb95e5df3a3b30023bd731f3d98417ccc12e57"} Nov 24 12:52:44 crc kubenswrapper[4930]: I1124 12:52:44.680259 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kpxwx" podStartSLOduration=2.093360193 podStartE2EDuration="4.680226495s" podCreationTimestamp="2025-11-24 12:52:40 +0000 UTC" firstStartedPulling="2025-11-24 12:52:41.609434811 +0000 UTC m=+3208.223762761" lastFinishedPulling="2025-11-24 12:52:44.196301113 +0000 UTC m=+3210.810629063" observedRunningTime="2025-11-24 12:52:44.667287363 +0000 UTC m=+3211.281615313" watchObservedRunningTime="2025-11-24 12:52:44.680226495 +0000 UTC m=+3211.294554445" Nov 24 12:52:46 crc kubenswrapper[4930]: I1124 12:52:46.027493 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:46 crc kubenswrapper[4930]: I1124 12:52:46.027936 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:47 crc kubenswrapper[4930]: I1124 12:52:47.081224 4930 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kvm4p" podUID="b0cb00d3-7146-4cb9-918b-a7abecb2ccda" containerName="registry-server" 
probeResult="failure" output=< Nov 24 12:52:47 crc kubenswrapper[4930]: timeout: failed to connect service ":50051" within 1s Nov 24 12:52:47 crc kubenswrapper[4930]: > Nov 24 12:52:50 crc kubenswrapper[4930]: I1124 12:52:50.810588 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:50 crc kubenswrapper[4930]: I1124 12:52:50.811129 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:50 crc kubenswrapper[4930]: I1124 12:52:50.854701 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:51 crc kubenswrapper[4930]: I1124 12:52:51.759744 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:51 crc kubenswrapper[4930]: I1124 12:52:51.804509 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kpxwx"] Nov 24 12:52:53 crc kubenswrapper[4930]: I1124 12:52:53.729167 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kpxwx" podUID="102b7539-783a-45a5-9ab5-ed67af4e677f" containerName="registry-server" containerID="cri-o://fcaf6d8acebb289ea58d2cc802fb95e5df3a3b30023bd731f3d98417ccc12e57" gracePeriod=2 Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.250812 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.432246 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102b7539-783a-45a5-9ab5-ed67af4e677f-catalog-content\") pod \"102b7539-783a-45a5-9ab5-ed67af4e677f\" (UID: \"102b7539-783a-45a5-9ab5-ed67af4e677f\") " Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.432794 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102b7539-783a-45a5-9ab5-ed67af4e677f-utilities\") pod \"102b7539-783a-45a5-9ab5-ed67af4e677f\" (UID: \"102b7539-783a-45a5-9ab5-ed67af4e677f\") " Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.433587 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102b7539-783a-45a5-9ab5-ed67af4e677f-utilities" (OuterVolumeSpecName: "utilities") pod "102b7539-783a-45a5-9ab5-ed67af4e677f" (UID: "102b7539-783a-45a5-9ab5-ed67af4e677f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.433819 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgnkf\" (UniqueName: \"kubernetes.io/projected/102b7539-783a-45a5-9ab5-ed67af4e677f-kube-api-access-xgnkf\") pod \"102b7539-783a-45a5-9ab5-ed67af4e677f\" (UID: \"102b7539-783a-45a5-9ab5-ed67af4e677f\") " Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.434993 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102b7539-783a-45a5-9ab5-ed67af4e677f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.441392 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102b7539-783a-45a5-9ab5-ed67af4e677f-kube-api-access-xgnkf" (OuterVolumeSpecName: "kube-api-access-xgnkf") pod "102b7539-783a-45a5-9ab5-ed67af4e677f" (UID: "102b7539-783a-45a5-9ab5-ed67af4e677f"). InnerVolumeSpecName "kube-api-access-xgnkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.485256 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102b7539-783a-45a5-9ab5-ed67af4e677f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "102b7539-783a-45a5-9ab5-ed67af4e677f" (UID: "102b7539-783a-45a5-9ab5-ed67af4e677f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.536308 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102b7539-783a-45a5-9ab5-ed67af4e677f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.536355 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgnkf\" (UniqueName: \"kubernetes.io/projected/102b7539-783a-45a5-9ab5-ed67af4e677f-kube-api-access-xgnkf\") on node \"crc\" DevicePath \"\"" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.744152 4930 generic.go:334] "Generic (PLEG): container finished" podID="102b7539-783a-45a5-9ab5-ed67af4e677f" containerID="fcaf6d8acebb289ea58d2cc802fb95e5df3a3b30023bd731f3d98417ccc12e57" exitCode=0 Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.744205 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpxwx" event={"ID":"102b7539-783a-45a5-9ab5-ed67af4e677f","Type":"ContainerDied","Data":"fcaf6d8acebb289ea58d2cc802fb95e5df3a3b30023bd731f3d98417ccc12e57"} Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.744240 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpxwx" event={"ID":"102b7539-783a-45a5-9ab5-ed67af4e677f","Type":"ContainerDied","Data":"e3041d7b944bc223a661655cf0641d90dc852ab82fcaaf4577b3d747a2068505"} Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.744260 4930 scope.go:117] "RemoveContainer" containerID="fcaf6d8acebb289ea58d2cc802fb95e5df3a3b30023bd731f3d98417ccc12e57" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.744391 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kpxwx" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.778383 4930 scope.go:117] "RemoveContainer" containerID="12e02c7f06d6ec8f37b8e7539536c3f818b415c7efe065e14c6793913b622ed9" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.790703 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kpxwx"] Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.795428 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kpxwx"] Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.804393 4930 scope.go:117] "RemoveContainer" containerID="be124734183a2bbdbc9813c783ed1b77e6ac455c53e84f3d6040adfaf4039770" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.849025 4930 scope.go:117] "RemoveContainer" containerID="fcaf6d8acebb289ea58d2cc802fb95e5df3a3b30023bd731f3d98417ccc12e57" Nov 24 12:52:54 crc kubenswrapper[4930]: E1124 12:52:54.849480 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcaf6d8acebb289ea58d2cc802fb95e5df3a3b30023bd731f3d98417ccc12e57\": container with ID starting with fcaf6d8acebb289ea58d2cc802fb95e5df3a3b30023bd731f3d98417ccc12e57 not found: ID does not exist" containerID="fcaf6d8acebb289ea58d2cc802fb95e5df3a3b30023bd731f3d98417ccc12e57" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.849556 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcaf6d8acebb289ea58d2cc802fb95e5df3a3b30023bd731f3d98417ccc12e57"} err="failed to get container status \"fcaf6d8acebb289ea58d2cc802fb95e5df3a3b30023bd731f3d98417ccc12e57\": rpc error: code = NotFound desc = could not find container \"fcaf6d8acebb289ea58d2cc802fb95e5df3a3b30023bd731f3d98417ccc12e57\": container with ID starting with fcaf6d8acebb289ea58d2cc802fb95e5df3a3b30023bd731f3d98417ccc12e57 not 
found: ID does not exist" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.849592 4930 scope.go:117] "RemoveContainer" containerID="12e02c7f06d6ec8f37b8e7539536c3f818b415c7efe065e14c6793913b622ed9" Nov 24 12:52:54 crc kubenswrapper[4930]: E1124 12:52:54.849957 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12e02c7f06d6ec8f37b8e7539536c3f818b415c7efe065e14c6793913b622ed9\": container with ID starting with 12e02c7f06d6ec8f37b8e7539536c3f818b415c7efe065e14c6793913b622ed9 not found: ID does not exist" containerID="12e02c7f06d6ec8f37b8e7539536c3f818b415c7efe065e14c6793913b622ed9" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.850017 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e02c7f06d6ec8f37b8e7539536c3f818b415c7efe065e14c6793913b622ed9"} err="failed to get container status \"12e02c7f06d6ec8f37b8e7539536c3f818b415c7efe065e14c6793913b622ed9\": rpc error: code = NotFound desc = could not find container \"12e02c7f06d6ec8f37b8e7539536c3f818b415c7efe065e14c6793913b622ed9\": container with ID starting with 12e02c7f06d6ec8f37b8e7539536c3f818b415c7efe065e14c6793913b622ed9 not found: ID does not exist" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.850064 4930 scope.go:117] "RemoveContainer" containerID="be124734183a2bbdbc9813c783ed1b77e6ac455c53e84f3d6040adfaf4039770" Nov 24 12:52:54 crc kubenswrapper[4930]: E1124 12:52:54.850362 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be124734183a2bbdbc9813c783ed1b77e6ac455c53e84f3d6040adfaf4039770\": container with ID starting with be124734183a2bbdbc9813c783ed1b77e6ac455c53e84f3d6040adfaf4039770 not found: ID does not exist" containerID="be124734183a2bbdbc9813c783ed1b77e6ac455c53e84f3d6040adfaf4039770" Nov 24 12:52:54 crc kubenswrapper[4930]: I1124 12:52:54.850395 4930 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be124734183a2bbdbc9813c783ed1b77e6ac455c53e84f3d6040adfaf4039770"} err="failed to get container status \"be124734183a2bbdbc9813c783ed1b77e6ac455c53e84f3d6040adfaf4039770\": rpc error: code = NotFound desc = could not find container \"be124734183a2bbdbc9813c783ed1b77e6ac455c53e84f3d6040adfaf4039770\": container with ID starting with be124734183a2bbdbc9813c783ed1b77e6ac455c53e84f3d6040adfaf4039770 not found: ID does not exist" Nov 24 12:52:55 crc kubenswrapper[4930]: I1124 12:52:55.085421 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:52:55 crc kubenswrapper[4930]: E1124 12:52:55.086075 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:52:56 crc kubenswrapper[4930]: I1124 12:52:56.079935 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:56 crc kubenswrapper[4930]: I1124 12:52:56.096622 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="102b7539-783a-45a5-9ab5-ed67af4e677f" path="/var/lib/kubelet/pods/102b7539-783a-45a5-9ab5-ed67af4e677f/volumes" Nov 24 12:52:56 crc kubenswrapper[4930]: I1124 12:52:56.138403 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:56 crc kubenswrapper[4930]: I1124 12:52:56.487574 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kvm4p"] Nov 24 
12:52:57 crc kubenswrapper[4930]: I1124 12:52:57.775502 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kvm4p" podUID="b0cb00d3-7146-4cb9-918b-a7abecb2ccda" containerName="registry-server" containerID="cri-o://fb584b6f95a8b1ee6f19526c55e276c6139fb57139514869025f4d7be57db9b8" gracePeriod=2 Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.244002 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.408192 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp7jp\" (UniqueName: \"kubernetes.io/projected/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-kube-api-access-mp7jp\") pod \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\" (UID: \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\") " Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.408299 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-catalog-content\") pod \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\" (UID: \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\") " Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.408334 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-utilities\") pod \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\" (UID: \"b0cb00d3-7146-4cb9-918b-a7abecb2ccda\") " Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.409210 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-utilities" (OuterVolumeSpecName: "utilities") pod "b0cb00d3-7146-4cb9-918b-a7abecb2ccda" (UID: "b0cb00d3-7146-4cb9-918b-a7abecb2ccda"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.416408 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-kube-api-access-mp7jp" (OuterVolumeSpecName: "kube-api-access-mp7jp") pod "b0cb00d3-7146-4cb9-918b-a7abecb2ccda" (UID: "b0cb00d3-7146-4cb9-918b-a7abecb2ccda"). InnerVolumeSpecName "kube-api-access-mp7jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.510295 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp7jp\" (UniqueName: \"kubernetes.io/projected/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-kube-api-access-mp7jp\") on node \"crc\" DevicePath \"\"" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.510337 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.513109 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0cb00d3-7146-4cb9-918b-a7abecb2ccda" (UID: "b0cb00d3-7146-4cb9-918b-a7abecb2ccda"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.612157 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cb00d3-7146-4cb9-918b-a7abecb2ccda-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.786377 4930 generic.go:334] "Generic (PLEG): container finished" podID="b0cb00d3-7146-4cb9-918b-a7abecb2ccda" containerID="fb584b6f95a8b1ee6f19526c55e276c6139fb57139514869025f4d7be57db9b8" exitCode=0 Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.786415 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvm4p" event={"ID":"b0cb00d3-7146-4cb9-918b-a7abecb2ccda","Type":"ContainerDied","Data":"fb584b6f95a8b1ee6f19526c55e276c6139fb57139514869025f4d7be57db9b8"} Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.786443 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvm4p" event={"ID":"b0cb00d3-7146-4cb9-918b-a7abecb2ccda","Type":"ContainerDied","Data":"3116ef35dbb18a38733dd98a7066cf5044f1dc1a226b771c9d81f719aeb98c6b"} Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.786460 4930 scope.go:117] "RemoveContainer" containerID="fb584b6f95a8b1ee6f19526c55e276c6139fb57139514869025f4d7be57db9b8" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.786457 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kvm4p" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.809907 4930 scope.go:117] "RemoveContainer" containerID="faa451e0a5595186221f5f9baec3c1f208f55669fb31f0a9a8f533488a2a00e6" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.823964 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kvm4p"] Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.835219 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kvm4p"] Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.841055 4930 scope.go:117] "RemoveContainer" containerID="bd2193761ffa6589db80292d6814600cba6794636363d34b9d30bb16c9dfcb02" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.882169 4930 scope.go:117] "RemoveContainer" containerID="fb584b6f95a8b1ee6f19526c55e276c6139fb57139514869025f4d7be57db9b8" Nov 24 12:52:58 crc kubenswrapper[4930]: E1124 12:52:58.882604 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb584b6f95a8b1ee6f19526c55e276c6139fb57139514869025f4d7be57db9b8\": container with ID starting with fb584b6f95a8b1ee6f19526c55e276c6139fb57139514869025f4d7be57db9b8 not found: ID does not exist" containerID="fb584b6f95a8b1ee6f19526c55e276c6139fb57139514869025f4d7be57db9b8" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.882645 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb584b6f95a8b1ee6f19526c55e276c6139fb57139514869025f4d7be57db9b8"} err="failed to get container status \"fb584b6f95a8b1ee6f19526c55e276c6139fb57139514869025f4d7be57db9b8\": rpc error: code = NotFound desc = could not find container \"fb584b6f95a8b1ee6f19526c55e276c6139fb57139514869025f4d7be57db9b8\": container with ID starting with fb584b6f95a8b1ee6f19526c55e276c6139fb57139514869025f4d7be57db9b8 not found: ID does 
not exist" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.882678 4930 scope.go:117] "RemoveContainer" containerID="faa451e0a5595186221f5f9baec3c1f208f55669fb31f0a9a8f533488a2a00e6" Nov 24 12:52:58 crc kubenswrapper[4930]: E1124 12:52:58.882930 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faa451e0a5595186221f5f9baec3c1f208f55669fb31f0a9a8f533488a2a00e6\": container with ID starting with faa451e0a5595186221f5f9baec3c1f208f55669fb31f0a9a8f533488a2a00e6 not found: ID does not exist" containerID="faa451e0a5595186221f5f9baec3c1f208f55669fb31f0a9a8f533488a2a00e6" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.882956 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faa451e0a5595186221f5f9baec3c1f208f55669fb31f0a9a8f533488a2a00e6"} err="failed to get container status \"faa451e0a5595186221f5f9baec3c1f208f55669fb31f0a9a8f533488a2a00e6\": rpc error: code = NotFound desc = could not find container \"faa451e0a5595186221f5f9baec3c1f208f55669fb31f0a9a8f533488a2a00e6\": container with ID starting with faa451e0a5595186221f5f9baec3c1f208f55669fb31f0a9a8f533488a2a00e6 not found: ID does not exist" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.882976 4930 scope.go:117] "RemoveContainer" containerID="bd2193761ffa6589db80292d6814600cba6794636363d34b9d30bb16c9dfcb02" Nov 24 12:52:58 crc kubenswrapper[4930]: E1124 12:52:58.883242 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd2193761ffa6589db80292d6814600cba6794636363d34b9d30bb16c9dfcb02\": container with ID starting with bd2193761ffa6589db80292d6814600cba6794636363d34b9d30bb16c9dfcb02 not found: ID does not exist" containerID="bd2193761ffa6589db80292d6814600cba6794636363d34b9d30bb16c9dfcb02" Nov 24 12:52:58 crc kubenswrapper[4930]: I1124 12:52:58.883271 4930 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2193761ffa6589db80292d6814600cba6794636363d34b9d30bb16c9dfcb02"} err="failed to get container status \"bd2193761ffa6589db80292d6814600cba6794636363d34b9d30bb16c9dfcb02\": rpc error: code = NotFound desc = could not find container \"bd2193761ffa6589db80292d6814600cba6794636363d34b9d30bb16c9dfcb02\": container with ID starting with bd2193761ffa6589db80292d6814600cba6794636363d34b9d30bb16c9dfcb02 not found: ID does not exist" Nov 24 12:53:00 crc kubenswrapper[4930]: I1124 12:53:00.106034 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0cb00d3-7146-4cb9-918b-a7abecb2ccda" path="/var/lib/kubelet/pods/b0cb00d3-7146-4cb9-918b-a7abecb2ccda/volumes" Nov 24 12:53:10 crc kubenswrapper[4930]: I1124 12:53:10.085418 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:53:10 crc kubenswrapper[4930]: E1124 12:53:10.086253 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:53:24 crc kubenswrapper[4930]: I1124 12:53:24.091337 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:53:24 crc kubenswrapper[4930]: E1124 12:53:24.092318 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:53:38 crc kubenswrapper[4930]: I1124 12:53:38.085396 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:53:38 crc kubenswrapper[4930]: E1124 12:53:38.086502 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:53:52 crc kubenswrapper[4930]: I1124 12:53:52.085675 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:53:52 crc kubenswrapper[4930]: E1124 12:53:52.086514 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:54:04 crc kubenswrapper[4930]: I1124 12:54:04.090713 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:54:04 crc kubenswrapper[4930]: E1124 12:54:04.091426 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:54:15 crc kubenswrapper[4930]: I1124 12:54:15.085352 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:54:15 crc kubenswrapper[4930]: E1124 12:54:15.086009 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:54:26 crc kubenswrapper[4930]: I1124 12:54:26.084766 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:54:26 crc kubenswrapper[4930]: E1124 12:54:26.085622 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:54:41 crc kubenswrapper[4930]: I1124 12:54:41.085170 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:54:41 crc kubenswrapper[4930]: E1124 12:54:41.086138 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:54:56 crc kubenswrapper[4930]: I1124 12:54:56.086040 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:54:56 crc kubenswrapper[4930]: E1124 12:54:56.086930 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:55:09 crc kubenswrapper[4930]: I1124 12:55:09.086283 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:55:09 crc kubenswrapper[4930]: E1124 12:55:09.087218 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:55:19 crc kubenswrapper[4930]: I1124 12:55:19.166787 4930 generic.go:334] "Generic (PLEG): container finished" podID="6a7fbabe-a7e2-469c-b6aa-22973dd510b3" containerID="cf07a45ccc1f1ca2f2ee2d8ed45d05b2c2bcd930b07ea9515b9f4996fcc611c1" exitCode=0 Nov 24 12:55:19 crc kubenswrapper[4930]: I1124 12:55:19.166879 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tempest-tests-tempest" event={"ID":"6a7fbabe-a7e2-469c-b6aa-22973dd510b3","Type":"ContainerDied","Data":"cf07a45ccc1f1ca2f2ee2d8ed45d05b2c2bcd930b07ea9515b9f4996fcc611c1"} Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.085384 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:55:20 crc kubenswrapper[4930]: E1124 12:55:20.086055 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.588781 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.748981 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.749711 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-test-operator-ephemeral-workdir\") pod \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.749795 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-openstack-config\") pod \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.749838 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-config-data\") pod \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.749944 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-ssh-key\") pod \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.749973 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-openstack-config-secret\") pod \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.749997 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-ca-certs\") pod \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.750016 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-test-operator-ephemeral-temporary\") pod \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " Nov 24 12:55:20 crc 
kubenswrapper[4930]: I1124 12:55:20.750066 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2wg7\" (UniqueName: \"kubernetes.io/projected/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-kube-api-access-n2wg7\") pod \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\" (UID: \"6a7fbabe-a7e2-469c-b6aa-22973dd510b3\") " Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.750575 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "6a7fbabe-a7e2-469c-b6aa-22973dd510b3" (UID: "6a7fbabe-a7e2-469c-b6aa-22973dd510b3"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.750792 4930 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.750877 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-config-data" (OuterVolumeSpecName: "config-data") pod "6a7fbabe-a7e2-469c-b6aa-22973dd510b3" (UID: "6a7fbabe-a7e2-469c-b6aa-22973dd510b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.755947 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "6a7fbabe-a7e2-469c-b6aa-22973dd510b3" (UID: "6a7fbabe-a7e2-469c-b6aa-22973dd510b3"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.755981 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-kube-api-access-n2wg7" (OuterVolumeSpecName: "kube-api-access-n2wg7") pod "6a7fbabe-a7e2-469c-b6aa-22973dd510b3" (UID: "6a7fbabe-a7e2-469c-b6aa-22973dd510b3"). InnerVolumeSpecName "kube-api-access-n2wg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.758832 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "6a7fbabe-a7e2-469c-b6aa-22973dd510b3" (UID: "6a7fbabe-a7e2-469c-b6aa-22973dd510b3"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.778009 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6a7fbabe-a7e2-469c-b6aa-22973dd510b3" (UID: "6a7fbabe-a7e2-469c-b6aa-22973dd510b3"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.780671 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "6a7fbabe-a7e2-469c-b6aa-22973dd510b3" (UID: "6a7fbabe-a7e2-469c-b6aa-22973dd510b3"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.795665 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6a7fbabe-a7e2-469c-b6aa-22973dd510b3" (UID: "6a7fbabe-a7e2-469c-b6aa-22973dd510b3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.815028 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6a7fbabe-a7e2-469c-b6aa-22973dd510b3" (UID: "6a7fbabe-a7e2-469c-b6aa-22973dd510b3"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.852526 4930 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.852609 4930 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.852622 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.852631 4930 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-ssh-key\") on node \"crc\" DevicePath 
\"\"" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.852639 4930 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.852649 4930 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.852657 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2wg7\" (UniqueName: \"kubernetes.io/projected/6a7fbabe-a7e2-469c-b6aa-22973dd510b3-kube-api-access-n2wg7\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.852687 4930 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.876409 4930 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 24 12:55:20 crc kubenswrapper[4930]: I1124 12:55:20.954031 4930 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.162011 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8xslz"] Nov 24 12:55:21 crc kubenswrapper[4930]: E1124 12:55:21.162614 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7fbabe-a7e2-469c-b6aa-22973dd510b3" containerName="tempest-tests-tempest-tests-runner" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 
12:55:21.162713 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7fbabe-a7e2-469c-b6aa-22973dd510b3" containerName="tempest-tests-tempest-tests-runner" Nov 24 12:55:21 crc kubenswrapper[4930]: E1124 12:55:21.162792 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cb00d3-7146-4cb9-918b-a7abecb2ccda" containerName="registry-server" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.162846 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cb00d3-7146-4cb9-918b-a7abecb2ccda" containerName="registry-server" Nov 24 12:55:21 crc kubenswrapper[4930]: E1124 12:55:21.162919 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cb00d3-7146-4cb9-918b-a7abecb2ccda" containerName="extract-content" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.162973 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cb00d3-7146-4cb9-918b-a7abecb2ccda" containerName="extract-content" Nov 24 12:55:21 crc kubenswrapper[4930]: E1124 12:55:21.163035 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cb00d3-7146-4cb9-918b-a7abecb2ccda" containerName="extract-utilities" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.163091 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cb00d3-7146-4cb9-918b-a7abecb2ccda" containerName="extract-utilities" Nov 24 12:55:21 crc kubenswrapper[4930]: E1124 12:55:21.163173 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102b7539-783a-45a5-9ab5-ed67af4e677f" containerName="extract-content" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.163245 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="102b7539-783a-45a5-9ab5-ed67af4e677f" containerName="extract-content" Nov 24 12:55:21 crc kubenswrapper[4930]: E1124 12:55:21.163308 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102b7539-783a-45a5-9ab5-ed67af4e677f" containerName="extract-utilities" Nov 24 12:55:21 crc 
kubenswrapper[4930]: I1124 12:55:21.163357 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="102b7539-783a-45a5-9ab5-ed67af4e677f" containerName="extract-utilities" Nov 24 12:55:21 crc kubenswrapper[4930]: E1124 12:55:21.163428 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102b7539-783a-45a5-9ab5-ed67af4e677f" containerName="registry-server" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.163621 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="102b7539-783a-45a5-9ab5-ed67af4e677f" containerName="registry-server" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.163896 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="102b7539-783a-45a5-9ab5-ed67af4e677f" containerName="registry-server" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.163978 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0cb00d3-7146-4cb9-918b-a7abecb2ccda" containerName="registry-server" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.164045 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7fbabe-a7e2-469c-b6aa-22973dd510b3" containerName="tempest-tests-tempest-tests-runner" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.165571 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.174109 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xslz"] Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.197833 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6a7fbabe-a7e2-469c-b6aa-22973dd510b3","Type":"ContainerDied","Data":"08ce9626187542031b663ee08261a3c561feab81796989f8f794a6536b412e2f"} Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.197883 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08ce9626187542031b663ee08261a3c561feab81796989f8f794a6536b412e2f" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.197957 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.258786 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq8l2\" (UniqueName: \"kubernetes.io/projected/d0e0747e-b96a-49db-b635-b15e51b6342e-kube-api-access-xq8l2\") pod \"community-operators-8xslz\" (UID: \"d0e0747e-b96a-49db-b635-b15e51b6342e\") " pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.258868 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e0747e-b96a-49db-b635-b15e51b6342e-catalog-content\") pod \"community-operators-8xslz\" (UID: \"d0e0747e-b96a-49db-b635-b15e51b6342e\") " pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.258920 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d0e0747e-b96a-49db-b635-b15e51b6342e-utilities\") pod \"community-operators-8xslz\" (UID: \"d0e0747e-b96a-49db-b635-b15e51b6342e\") " pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.360572 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e0747e-b96a-49db-b635-b15e51b6342e-utilities\") pod \"community-operators-8xslz\" (UID: \"d0e0747e-b96a-49db-b635-b15e51b6342e\") " pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.360728 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq8l2\" (UniqueName: \"kubernetes.io/projected/d0e0747e-b96a-49db-b635-b15e51b6342e-kube-api-access-xq8l2\") pod \"community-operators-8xslz\" (UID: \"d0e0747e-b96a-49db-b635-b15e51b6342e\") " pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.360770 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e0747e-b96a-49db-b635-b15e51b6342e-catalog-content\") pod \"community-operators-8xslz\" (UID: \"d0e0747e-b96a-49db-b635-b15e51b6342e\") " pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.361076 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e0747e-b96a-49db-b635-b15e51b6342e-utilities\") pod \"community-operators-8xslz\" (UID: \"d0e0747e-b96a-49db-b635-b15e51b6342e\") " pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.361101 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d0e0747e-b96a-49db-b635-b15e51b6342e-catalog-content\") pod \"community-operators-8xslz\" (UID: \"d0e0747e-b96a-49db-b635-b15e51b6342e\") " pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.380165 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq8l2\" (UniqueName: \"kubernetes.io/projected/d0e0747e-b96a-49db-b635-b15e51b6342e-kube-api-access-xq8l2\") pod \"community-operators-8xslz\" (UID: \"d0e0747e-b96a-49db-b635-b15e51b6342e\") " pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:21 crc kubenswrapper[4930]: I1124 12:55:21.497465 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:22 crc kubenswrapper[4930]: I1124 12:55:22.004905 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xslz"] Nov 24 12:55:22 crc kubenswrapper[4930]: W1124 12:55:22.010364 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0e0747e_b96a_49db_b635_b15e51b6342e.slice/crio-b4d4423a5942afb0a72b48dd860022a46a8ec519bfe142c88017b9b26fde94e0 WatchSource:0}: Error finding container b4d4423a5942afb0a72b48dd860022a46a8ec519bfe142c88017b9b26fde94e0: Status 404 returned error can't find the container with id b4d4423a5942afb0a72b48dd860022a46a8ec519bfe142c88017b9b26fde94e0 Nov 24 12:55:22 crc kubenswrapper[4930]: I1124 12:55:22.207681 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xslz" event={"ID":"d0e0747e-b96a-49db-b635-b15e51b6342e","Type":"ContainerStarted","Data":"4157d3939236ca48f61e1f2cab940ac689bfa942a7c91f71197ba6504644f18d"} Nov 24 12:55:22 crc kubenswrapper[4930]: I1124 12:55:22.208182 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-8xslz" event={"ID":"d0e0747e-b96a-49db-b635-b15e51b6342e","Type":"ContainerStarted","Data":"b4d4423a5942afb0a72b48dd860022a46a8ec519bfe142c88017b9b26fde94e0"} Nov 24 12:55:23 crc kubenswrapper[4930]: I1124 12:55:23.224249 4930 generic.go:334] "Generic (PLEG): container finished" podID="d0e0747e-b96a-49db-b635-b15e51b6342e" containerID="4157d3939236ca48f61e1f2cab940ac689bfa942a7c91f71197ba6504644f18d" exitCode=0 Nov 24 12:55:23 crc kubenswrapper[4930]: I1124 12:55:23.224304 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xslz" event={"ID":"d0e0747e-b96a-49db-b635-b15e51b6342e","Type":"ContainerDied","Data":"4157d3939236ca48f61e1f2cab940ac689bfa942a7c91f71197ba6504644f18d"} Nov 24 12:55:24 crc kubenswrapper[4930]: I1124 12:55:24.233818 4930 generic.go:334] "Generic (PLEG): container finished" podID="d0e0747e-b96a-49db-b635-b15e51b6342e" containerID="b6eedf451ec43a4c718d2b1de88158f63d1e6107ee2009abfc1c5c6a7c027c8e" exitCode=0 Nov 24 12:55:24 crc kubenswrapper[4930]: I1124 12:55:24.233930 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xslz" event={"ID":"d0e0747e-b96a-49db-b635-b15e51b6342e","Type":"ContainerDied","Data":"b6eedf451ec43a4c718d2b1de88158f63d1e6107ee2009abfc1c5c6a7c027c8e"} Nov 24 12:55:25 crc kubenswrapper[4930]: I1124 12:55:25.249590 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xslz" event={"ID":"d0e0747e-b96a-49db-b635-b15e51b6342e","Type":"ContainerStarted","Data":"4ef9d0da5d1255ec4a8e5cce10a43fcc9d6e86542e44e0c73df9eddfdb2b1a53"} Nov 24 12:55:25 crc kubenswrapper[4930]: I1124 12:55:25.277586 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8xslz" podStartSLOduration=2.811780601 podStartE2EDuration="4.277563289s" podCreationTimestamp="2025-11-24 12:55:21 +0000 UTC" 
firstStartedPulling="2025-11-24 12:55:23.226648542 +0000 UTC m=+3369.840976512" lastFinishedPulling="2025-11-24 12:55:24.69243125 +0000 UTC m=+3371.306759200" observedRunningTime="2025-11-24 12:55:25.270193967 +0000 UTC m=+3371.884521937" watchObservedRunningTime="2025-11-24 12:55:25.277563289 +0000 UTC m=+3371.891891239" Nov 24 12:55:27 crc kubenswrapper[4930]: I1124 12:55:27.998157 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 12:55:28 crc kubenswrapper[4930]: I1124 12:55:28.002936 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:55:28 crc kubenswrapper[4930]: I1124 12:55:28.006576 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-cs6lk" Nov 24 12:55:28 crc kubenswrapper[4930]: I1124 12:55:28.019870 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 12:55:28 crc kubenswrapper[4930]: I1124 12:55:28.093771 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv5j2\" (UniqueName: \"kubernetes.io/projected/7ecdd72c-294a-43fa-bd7a-edf2e10447fd-kube-api-access-rv5j2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7ecdd72c-294a-43fa-bd7a-edf2e10447fd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:55:28 crc kubenswrapper[4930]: I1124 12:55:28.094199 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7ecdd72c-294a-43fa-bd7a-edf2e10447fd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:55:28 crc 
kubenswrapper[4930]: I1124 12:55:28.197579 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7ecdd72c-294a-43fa-bd7a-edf2e10447fd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:55:28 crc kubenswrapper[4930]: I1124 12:55:28.198339 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv5j2\" (UniqueName: \"kubernetes.io/projected/7ecdd72c-294a-43fa-bd7a-edf2e10447fd-kube-api-access-rv5j2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7ecdd72c-294a-43fa-bd7a-edf2e10447fd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:55:28 crc kubenswrapper[4930]: I1124 12:55:28.202750 4930 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7ecdd72c-294a-43fa-bd7a-edf2e10447fd\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:55:28 crc kubenswrapper[4930]: I1124 12:55:28.226079 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv5j2\" (UniqueName: \"kubernetes.io/projected/7ecdd72c-294a-43fa-bd7a-edf2e10447fd-kube-api-access-rv5j2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7ecdd72c-294a-43fa-bd7a-edf2e10447fd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:55:28 crc kubenswrapper[4930]: I1124 12:55:28.242689 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7ecdd72c-294a-43fa-bd7a-edf2e10447fd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:55:28 crc kubenswrapper[4930]: I1124 12:55:28.339621 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 12:55:28 crc kubenswrapper[4930]: I1124 12:55:28.834675 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 12:55:28 crc kubenswrapper[4930]: W1124 12:55:28.837145 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ecdd72c_294a_43fa_bd7a_edf2e10447fd.slice/crio-a13eac174ba26f41b11332d22d20c0aa22331e049450a6f8c37630de9e27edab WatchSource:0}: Error finding container a13eac174ba26f41b11332d22d20c0aa22331e049450a6f8c37630de9e27edab: Status 404 returned error can't find the container with id a13eac174ba26f41b11332d22d20c0aa22331e049450a6f8c37630de9e27edab Nov 24 12:55:29 crc kubenswrapper[4930]: I1124 12:55:29.285681 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7ecdd72c-294a-43fa-bd7a-edf2e10447fd","Type":"ContainerStarted","Data":"a13eac174ba26f41b11332d22d20c0aa22331e049450a6f8c37630de9e27edab"} Nov 24 12:55:30 crc kubenswrapper[4930]: I1124 12:55:30.297465 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7ecdd72c-294a-43fa-bd7a-edf2e10447fd","Type":"ContainerStarted","Data":"afb4cf75356d45b57c5318f6fc46d0a41a3a15996f5e0268f776b4944ac3c888"} Nov 24 12:55:30 crc kubenswrapper[4930]: I1124 12:55:30.317919 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
podStartSLOduration=2.5191144850000002 podStartE2EDuration="3.317900852s" podCreationTimestamp="2025-11-24 12:55:27 +0000 UTC" firstStartedPulling="2025-11-24 12:55:28.840270824 +0000 UTC m=+3375.454598774" lastFinishedPulling="2025-11-24 12:55:29.639057181 +0000 UTC m=+3376.253385141" observedRunningTime="2025-11-24 12:55:30.312904398 +0000 UTC m=+3376.927232358" watchObservedRunningTime="2025-11-24 12:55:30.317900852 +0000 UTC m=+3376.932228802" Nov 24 12:55:31 crc kubenswrapper[4930]: I1124 12:55:31.498269 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:31 crc kubenswrapper[4930]: I1124 12:55:31.498742 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:31 crc kubenswrapper[4930]: I1124 12:55:31.543293 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:32 crc kubenswrapper[4930]: I1124 12:55:32.370394 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:32 crc kubenswrapper[4930]: I1124 12:55:32.415432 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xslz"] Nov 24 12:55:34 crc kubenswrapper[4930]: I1124 12:55:34.337974 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8xslz" podUID="d0e0747e-b96a-49db-b635-b15e51b6342e" containerName="registry-server" containerID="cri-o://4ef9d0da5d1255ec4a8e5cce10a43fcc9d6e86542e44e0c73df9eddfdb2b1a53" gracePeriod=2 Nov 24 12:55:34 crc kubenswrapper[4930]: I1124 12:55:34.816363 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:34 crc kubenswrapper[4930]: I1124 12:55:34.927011 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e0747e-b96a-49db-b635-b15e51b6342e-catalog-content\") pod \"d0e0747e-b96a-49db-b635-b15e51b6342e\" (UID: \"d0e0747e-b96a-49db-b635-b15e51b6342e\") " Nov 24 12:55:34 crc kubenswrapper[4930]: I1124 12:55:34.927140 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e0747e-b96a-49db-b635-b15e51b6342e-utilities\") pod \"d0e0747e-b96a-49db-b635-b15e51b6342e\" (UID: \"d0e0747e-b96a-49db-b635-b15e51b6342e\") " Nov 24 12:55:34 crc kubenswrapper[4930]: I1124 12:55:34.927186 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq8l2\" (UniqueName: \"kubernetes.io/projected/d0e0747e-b96a-49db-b635-b15e51b6342e-kube-api-access-xq8l2\") pod \"d0e0747e-b96a-49db-b635-b15e51b6342e\" (UID: \"d0e0747e-b96a-49db-b635-b15e51b6342e\") " Nov 24 12:55:34 crc kubenswrapper[4930]: I1124 12:55:34.928449 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e0747e-b96a-49db-b635-b15e51b6342e-utilities" (OuterVolumeSpecName: "utilities") pod "d0e0747e-b96a-49db-b635-b15e51b6342e" (UID: "d0e0747e-b96a-49db-b635-b15e51b6342e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:55:34 crc kubenswrapper[4930]: I1124 12:55:34.934674 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e0747e-b96a-49db-b635-b15e51b6342e-kube-api-access-xq8l2" (OuterVolumeSpecName: "kube-api-access-xq8l2") pod "d0e0747e-b96a-49db-b635-b15e51b6342e" (UID: "d0e0747e-b96a-49db-b635-b15e51b6342e"). InnerVolumeSpecName "kube-api-access-xq8l2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.029226 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e0747e-b96a-49db-b635-b15e51b6342e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.029259 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq8l2\" (UniqueName: \"kubernetes.io/projected/d0e0747e-b96a-49db-b635-b15e51b6342e-kube-api-access-xq8l2\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.084122 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:55:35 crc kubenswrapper[4930]: E1124 12:55:35.084465 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.108522 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e0747e-b96a-49db-b635-b15e51b6342e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0e0747e-b96a-49db-b635-b15e51b6342e" (UID: "d0e0747e-b96a-49db-b635-b15e51b6342e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.130915 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e0747e-b96a-49db-b635-b15e51b6342e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.349090 4930 generic.go:334] "Generic (PLEG): container finished" podID="d0e0747e-b96a-49db-b635-b15e51b6342e" containerID="4ef9d0da5d1255ec4a8e5cce10a43fcc9d6e86542e44e0c73df9eddfdb2b1a53" exitCode=0 Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.349132 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xslz" event={"ID":"d0e0747e-b96a-49db-b635-b15e51b6342e","Type":"ContainerDied","Data":"4ef9d0da5d1255ec4a8e5cce10a43fcc9d6e86542e44e0c73df9eddfdb2b1a53"} Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.349159 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xslz" event={"ID":"d0e0747e-b96a-49db-b635-b15e51b6342e","Type":"ContainerDied","Data":"b4d4423a5942afb0a72b48dd860022a46a8ec519bfe142c88017b9b26fde94e0"} Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.349177 4930 scope.go:117] "RemoveContainer" containerID="4ef9d0da5d1255ec4a8e5cce10a43fcc9d6e86542e44e0c73df9eddfdb2b1a53" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.349220 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8xslz" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.384623 4930 scope.go:117] "RemoveContainer" containerID="b6eedf451ec43a4c718d2b1de88158f63d1e6107ee2009abfc1c5c6a7c027c8e" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.391234 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xslz"] Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.400290 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8xslz"] Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.432874 4930 scope.go:117] "RemoveContainer" containerID="4157d3939236ca48f61e1f2cab940ac689bfa942a7c91f71197ba6504644f18d" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.460230 4930 scope.go:117] "RemoveContainer" containerID="4ef9d0da5d1255ec4a8e5cce10a43fcc9d6e86542e44e0c73df9eddfdb2b1a53" Nov 24 12:55:35 crc kubenswrapper[4930]: E1124 12:55:35.460710 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ef9d0da5d1255ec4a8e5cce10a43fcc9d6e86542e44e0c73df9eddfdb2b1a53\": container with ID starting with 4ef9d0da5d1255ec4a8e5cce10a43fcc9d6e86542e44e0c73df9eddfdb2b1a53 not found: ID does not exist" containerID="4ef9d0da5d1255ec4a8e5cce10a43fcc9d6e86542e44e0c73df9eddfdb2b1a53" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.460757 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ef9d0da5d1255ec4a8e5cce10a43fcc9d6e86542e44e0c73df9eddfdb2b1a53"} err="failed to get container status \"4ef9d0da5d1255ec4a8e5cce10a43fcc9d6e86542e44e0c73df9eddfdb2b1a53\": rpc error: code = NotFound desc = could not find container \"4ef9d0da5d1255ec4a8e5cce10a43fcc9d6e86542e44e0c73df9eddfdb2b1a53\": container with ID starting with 4ef9d0da5d1255ec4a8e5cce10a43fcc9d6e86542e44e0c73df9eddfdb2b1a53 not 
found: ID does not exist" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.460789 4930 scope.go:117] "RemoveContainer" containerID="b6eedf451ec43a4c718d2b1de88158f63d1e6107ee2009abfc1c5c6a7c027c8e" Nov 24 12:55:35 crc kubenswrapper[4930]: E1124 12:55:35.461135 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6eedf451ec43a4c718d2b1de88158f63d1e6107ee2009abfc1c5c6a7c027c8e\": container with ID starting with b6eedf451ec43a4c718d2b1de88158f63d1e6107ee2009abfc1c5c6a7c027c8e not found: ID does not exist" containerID="b6eedf451ec43a4c718d2b1de88158f63d1e6107ee2009abfc1c5c6a7c027c8e" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.461167 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6eedf451ec43a4c718d2b1de88158f63d1e6107ee2009abfc1c5c6a7c027c8e"} err="failed to get container status \"b6eedf451ec43a4c718d2b1de88158f63d1e6107ee2009abfc1c5c6a7c027c8e\": rpc error: code = NotFound desc = could not find container \"b6eedf451ec43a4c718d2b1de88158f63d1e6107ee2009abfc1c5c6a7c027c8e\": container with ID starting with b6eedf451ec43a4c718d2b1de88158f63d1e6107ee2009abfc1c5c6a7c027c8e not found: ID does not exist" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.461188 4930 scope.go:117] "RemoveContainer" containerID="4157d3939236ca48f61e1f2cab940ac689bfa942a7c91f71197ba6504644f18d" Nov 24 12:55:35 crc kubenswrapper[4930]: E1124 12:55:35.461409 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4157d3939236ca48f61e1f2cab940ac689bfa942a7c91f71197ba6504644f18d\": container with ID starting with 4157d3939236ca48f61e1f2cab940ac689bfa942a7c91f71197ba6504644f18d not found: ID does not exist" containerID="4157d3939236ca48f61e1f2cab940ac689bfa942a7c91f71197ba6504644f18d" Nov 24 12:55:35 crc kubenswrapper[4930]: I1124 12:55:35.461516 4930 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4157d3939236ca48f61e1f2cab940ac689bfa942a7c91f71197ba6504644f18d"} err="failed to get container status \"4157d3939236ca48f61e1f2cab940ac689bfa942a7c91f71197ba6504644f18d\": rpc error: code = NotFound desc = could not find container \"4157d3939236ca48f61e1f2cab940ac689bfa942a7c91f71197ba6504644f18d\": container with ID starting with 4157d3939236ca48f61e1f2cab940ac689bfa942a7c91f71197ba6504644f18d not found: ID does not exist" Nov 24 12:55:36 crc kubenswrapper[4930]: I1124 12:55:36.097353 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e0747e-b96a-49db-b635-b15e51b6342e" path="/var/lib/kubelet/pods/d0e0747e-b96a-49db-b635-b15e51b6342e/volumes" Nov 24 12:55:50 crc kubenswrapper[4930]: I1124 12:55:50.085090 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:55:50 crc kubenswrapper[4930]: E1124 12:55:50.085715 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:55:50 crc kubenswrapper[4930]: I1124 12:55:50.962213 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-49g4x"] Nov 24 12:55:50 crc kubenswrapper[4930]: E1124 12:55:50.967224 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e0747e-b96a-49db-b635-b15e51b6342e" containerName="extract-content" Nov 24 12:55:50 crc kubenswrapper[4930]: I1124 12:55:50.967269 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e0747e-b96a-49db-b635-b15e51b6342e" 
containerName="extract-content" Nov 24 12:55:50 crc kubenswrapper[4930]: E1124 12:55:50.967322 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e0747e-b96a-49db-b635-b15e51b6342e" containerName="extract-utilities" Nov 24 12:55:50 crc kubenswrapper[4930]: I1124 12:55:50.967330 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e0747e-b96a-49db-b635-b15e51b6342e" containerName="extract-utilities" Nov 24 12:55:50 crc kubenswrapper[4930]: E1124 12:55:50.967337 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e0747e-b96a-49db-b635-b15e51b6342e" containerName="registry-server" Nov 24 12:55:50 crc kubenswrapper[4930]: I1124 12:55:50.967346 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e0747e-b96a-49db-b635-b15e51b6342e" containerName="registry-server" Nov 24 12:55:50 crc kubenswrapper[4930]: I1124 12:55:50.967586 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e0747e-b96a-49db-b635-b15e51b6342e" containerName="registry-server" Nov 24 12:55:50 crc kubenswrapper[4930]: I1124 12:55:50.969291 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:55:50 crc kubenswrapper[4930]: I1124 12:55:50.979096 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-49g4x"] Nov 24 12:55:51 crc kubenswrapper[4930]: I1124 12:55:51.038578 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8ec77e-fbb1-489d-942a-4dc17a894677-catalog-content\") pod \"redhat-marketplace-49g4x\" (UID: \"7b8ec77e-fbb1-489d-942a-4dc17a894677\") " pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:55:51 crc kubenswrapper[4930]: I1124 12:55:51.038674 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xrdw\" (UniqueName: \"kubernetes.io/projected/7b8ec77e-fbb1-489d-942a-4dc17a894677-kube-api-access-4xrdw\") pod \"redhat-marketplace-49g4x\" (UID: \"7b8ec77e-fbb1-489d-942a-4dc17a894677\") " pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:55:51 crc kubenswrapper[4930]: I1124 12:55:51.038714 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8ec77e-fbb1-489d-942a-4dc17a894677-utilities\") pod \"redhat-marketplace-49g4x\" (UID: \"7b8ec77e-fbb1-489d-942a-4dc17a894677\") " pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:55:51 crc kubenswrapper[4930]: I1124 12:55:51.140442 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8ec77e-fbb1-489d-942a-4dc17a894677-catalog-content\") pod \"redhat-marketplace-49g4x\" (UID: \"7b8ec77e-fbb1-489d-942a-4dc17a894677\") " pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:55:51 crc kubenswrapper[4930]: I1124 12:55:51.140774 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4xrdw\" (UniqueName: \"kubernetes.io/projected/7b8ec77e-fbb1-489d-942a-4dc17a894677-kube-api-access-4xrdw\") pod \"redhat-marketplace-49g4x\" (UID: \"7b8ec77e-fbb1-489d-942a-4dc17a894677\") " pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:55:51 crc kubenswrapper[4930]: I1124 12:55:51.140849 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8ec77e-fbb1-489d-942a-4dc17a894677-utilities\") pod \"redhat-marketplace-49g4x\" (UID: \"7b8ec77e-fbb1-489d-942a-4dc17a894677\") " pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:55:51 crc kubenswrapper[4930]: I1124 12:55:51.141065 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8ec77e-fbb1-489d-942a-4dc17a894677-catalog-content\") pod \"redhat-marketplace-49g4x\" (UID: \"7b8ec77e-fbb1-489d-942a-4dc17a894677\") " pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:55:51 crc kubenswrapper[4930]: I1124 12:55:51.142405 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8ec77e-fbb1-489d-942a-4dc17a894677-utilities\") pod \"redhat-marketplace-49g4x\" (UID: \"7b8ec77e-fbb1-489d-942a-4dc17a894677\") " pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:55:51 crc kubenswrapper[4930]: I1124 12:55:51.179530 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xrdw\" (UniqueName: \"kubernetes.io/projected/7b8ec77e-fbb1-489d-942a-4dc17a894677-kube-api-access-4xrdw\") pod \"redhat-marketplace-49g4x\" (UID: \"7b8ec77e-fbb1-489d-942a-4dc17a894677\") " pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:55:51 crc kubenswrapper[4930]: I1124 12:55:51.301667 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:55:51 crc kubenswrapper[4930]: I1124 12:55:51.747435 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-49g4x"] Nov 24 12:55:52 crc kubenswrapper[4930]: I1124 12:55:52.558514 4930 generic.go:334] "Generic (PLEG): container finished" podID="7b8ec77e-fbb1-489d-942a-4dc17a894677" containerID="38e8ab1be16c6ba2a17c93d9e891705599d172bd1560c03367632428ecb8db45" exitCode=0 Nov 24 12:55:52 crc kubenswrapper[4930]: I1124 12:55:52.558585 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49g4x" event={"ID":"7b8ec77e-fbb1-489d-942a-4dc17a894677","Type":"ContainerDied","Data":"38e8ab1be16c6ba2a17c93d9e891705599d172bd1560c03367632428ecb8db45"} Nov 24 12:55:52 crc kubenswrapper[4930]: I1124 12:55:52.558907 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49g4x" event={"ID":"7b8ec77e-fbb1-489d-942a-4dc17a894677","Type":"ContainerStarted","Data":"5614d88708f8f59bfd16c73730965f12e450cd4543f0fac731b72e32ac9d996e"} Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.001987 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9jcmb/must-gather-d9vw9"] Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.004237 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jcmb/must-gather-d9vw9" Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.009228 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9jcmb"/"default-dockercfg-ts9pl" Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.009280 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9jcmb"/"openshift-service-ca.crt" Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.009921 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9jcmb"/"kube-root-ca.crt" Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.013572 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9jcmb/must-gather-d9vw9"] Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.074257 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9s6\" (UniqueName: \"kubernetes.io/projected/0babc740-20f9-4f89-95e9-b6e710be5633-kube-api-access-9z9s6\") pod \"must-gather-d9vw9\" (UID: \"0babc740-20f9-4f89-95e9-b6e710be5633\") " pod="openshift-must-gather-9jcmb/must-gather-d9vw9" Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.074370 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0babc740-20f9-4f89-95e9-b6e710be5633-must-gather-output\") pod \"must-gather-d9vw9\" (UID: \"0babc740-20f9-4f89-95e9-b6e710be5633\") " pod="openshift-must-gather-9jcmb/must-gather-d9vw9" Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.176000 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9s6\" (UniqueName: \"kubernetes.io/projected/0babc740-20f9-4f89-95e9-b6e710be5633-kube-api-access-9z9s6\") pod \"must-gather-d9vw9\" (UID: \"0babc740-20f9-4f89-95e9-b6e710be5633\") " 
pod="openshift-must-gather-9jcmb/must-gather-d9vw9" Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.176198 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0babc740-20f9-4f89-95e9-b6e710be5633-must-gather-output\") pod \"must-gather-d9vw9\" (UID: \"0babc740-20f9-4f89-95e9-b6e710be5633\") " pod="openshift-must-gather-9jcmb/must-gather-d9vw9" Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.176822 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0babc740-20f9-4f89-95e9-b6e710be5633-must-gather-output\") pod \"must-gather-d9vw9\" (UID: \"0babc740-20f9-4f89-95e9-b6e710be5633\") " pod="openshift-must-gather-9jcmb/must-gather-d9vw9" Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.201476 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9s6\" (UniqueName: \"kubernetes.io/projected/0babc740-20f9-4f89-95e9-b6e710be5633-kube-api-access-9z9s6\") pod \"must-gather-d9vw9\" (UID: \"0babc740-20f9-4f89-95e9-b6e710be5633\") " pod="openshift-must-gather-9jcmb/must-gather-d9vw9" Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.324834 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jcmb/must-gather-d9vw9" Nov 24 12:55:53 crc kubenswrapper[4930]: I1124 12:55:53.812652 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9jcmb/must-gather-d9vw9"] Nov 24 12:55:54 crc kubenswrapper[4930]: I1124 12:55:54.604314 4930 generic.go:334] "Generic (PLEG): container finished" podID="7b8ec77e-fbb1-489d-942a-4dc17a894677" containerID="3eca61dd04d5fd27a569eb82217f4add64ca73efa09d05da285116df29833184" exitCode=0 Nov 24 12:55:54 crc kubenswrapper[4930]: I1124 12:55:54.604451 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49g4x" event={"ID":"7b8ec77e-fbb1-489d-942a-4dc17a894677","Type":"ContainerDied","Data":"3eca61dd04d5fd27a569eb82217f4add64ca73efa09d05da285116df29833184"} Nov 24 12:55:54 crc kubenswrapper[4930]: I1124 12:55:54.607923 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jcmb/must-gather-d9vw9" event={"ID":"0babc740-20f9-4f89-95e9-b6e710be5633","Type":"ContainerStarted","Data":"1dad09829ece1df7dbc82bcd5fa57ebb3e9aa8bfcba3ed163c661dead26e55ee"} Nov 24 12:55:55 crc kubenswrapper[4930]: I1124 12:55:55.623830 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49g4x" event={"ID":"7b8ec77e-fbb1-489d-942a-4dc17a894677","Type":"ContainerStarted","Data":"1259ded101a66a99a39b22ac57fdafe16d4268ccd98fb2edc76e69a240af0bc0"} Nov 24 12:55:55 crc kubenswrapper[4930]: I1124 12:55:55.642455 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-49g4x" podStartSLOduration=3.155216856 podStartE2EDuration="5.642436297s" podCreationTimestamp="2025-11-24 12:55:50 +0000 UTC" firstStartedPulling="2025-11-24 12:55:52.560258115 +0000 UTC m=+3399.174586065" lastFinishedPulling="2025-11-24 12:55:55.047477556 +0000 UTC m=+3401.661805506" observedRunningTime="2025-11-24 12:55:55.640232243 +0000 UTC 
m=+3402.254560183" watchObservedRunningTime="2025-11-24 12:55:55.642436297 +0000 UTC m=+3402.256764247" Nov 24 12:55:58 crc kubenswrapper[4930]: I1124 12:55:58.651358 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jcmb/must-gather-d9vw9" event={"ID":"0babc740-20f9-4f89-95e9-b6e710be5633","Type":"ContainerStarted","Data":"460a5109a284dd6d18f25770449113a124dee9041f084836ae1b8c307ba1acca"} Nov 24 12:55:58 crc kubenswrapper[4930]: I1124 12:55:58.652311 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jcmb/must-gather-d9vw9" event={"ID":"0babc740-20f9-4f89-95e9-b6e710be5633","Type":"ContainerStarted","Data":"b1a7cdf1214bf6f14f2c8a47086a66674f50f7255087a6dd13a19be0cae8cc98"} Nov 24 12:55:58 crc kubenswrapper[4930]: I1124 12:55:58.666074 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9jcmb/must-gather-d9vw9" podStartSLOduration=2.359080121 podStartE2EDuration="6.666054627s" podCreationTimestamp="2025-11-24 12:55:52 +0000 UTC" firstStartedPulling="2025-11-24 12:55:53.816947067 +0000 UTC m=+3400.431275017" lastFinishedPulling="2025-11-24 12:55:58.123921573 +0000 UTC m=+3404.738249523" observedRunningTime="2025-11-24 12:55:58.663963977 +0000 UTC m=+3405.278291937" watchObservedRunningTime="2025-11-24 12:55:58.666054627 +0000 UTC m=+3405.280382577" Nov 24 12:56:01 crc kubenswrapper[4930]: I1124 12:56:01.302932 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:56:01 crc kubenswrapper[4930]: I1124 12:56:01.304129 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:56:01 crc kubenswrapper[4930]: I1124 12:56:01.364517 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:56:01 crc kubenswrapper[4930]: I1124 
12:56:01.735867 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:56:01 crc kubenswrapper[4930]: I1124 12:56:01.795160 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-49g4x"] Nov 24 12:56:02 crc kubenswrapper[4930]: I1124 12:56:02.085233 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:56:02 crc kubenswrapper[4930]: E1124 12:56:02.085535 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:56:02 crc kubenswrapper[4930]: I1124 12:56:02.168956 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9jcmb/crc-debug-cw2dd"] Nov 24 12:56:02 crc kubenswrapper[4930]: I1124 12:56:02.170261 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" Nov 24 12:56:02 crc kubenswrapper[4930]: I1124 12:56:02.266917 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb4gf\" (UniqueName: \"kubernetes.io/projected/1fdd139d-1f79-4827-b768-983c39235004-kube-api-access-jb4gf\") pod \"crc-debug-cw2dd\" (UID: \"1fdd139d-1f79-4827-b768-983c39235004\") " pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" Nov 24 12:56:02 crc kubenswrapper[4930]: I1124 12:56:02.267135 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fdd139d-1f79-4827-b768-983c39235004-host\") pod \"crc-debug-cw2dd\" (UID: \"1fdd139d-1f79-4827-b768-983c39235004\") " pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" Nov 24 12:56:02 crc kubenswrapper[4930]: I1124 12:56:02.368791 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fdd139d-1f79-4827-b768-983c39235004-host\") pod \"crc-debug-cw2dd\" (UID: \"1fdd139d-1f79-4827-b768-983c39235004\") " pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" Nov 24 12:56:02 crc kubenswrapper[4930]: I1124 12:56:02.368972 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb4gf\" (UniqueName: \"kubernetes.io/projected/1fdd139d-1f79-4827-b768-983c39235004-kube-api-access-jb4gf\") pod \"crc-debug-cw2dd\" (UID: \"1fdd139d-1f79-4827-b768-983c39235004\") " pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" Nov 24 12:56:02 crc kubenswrapper[4930]: I1124 12:56:02.369046 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fdd139d-1f79-4827-b768-983c39235004-host\") pod \"crc-debug-cw2dd\" (UID: \"1fdd139d-1f79-4827-b768-983c39235004\") " pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" Nov 24 12:56:02 crc 
kubenswrapper[4930]: I1124 12:56:02.394370 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb4gf\" (UniqueName: \"kubernetes.io/projected/1fdd139d-1f79-4827-b768-983c39235004-kube-api-access-jb4gf\") pod \"crc-debug-cw2dd\" (UID: \"1fdd139d-1f79-4827-b768-983c39235004\") " pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" Nov 24 12:56:02 crc kubenswrapper[4930]: I1124 12:56:02.489342 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" Nov 24 12:56:02 crc kubenswrapper[4930]: W1124 12:56:02.536293 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fdd139d_1f79_4827_b768_983c39235004.slice/crio-8f9cfc5ad616d1b4b0ba1d8757fb0c4067f1be9a545b0dd335d2483777343359 WatchSource:0}: Error finding container 8f9cfc5ad616d1b4b0ba1d8757fb0c4067f1be9a545b0dd335d2483777343359: Status 404 returned error can't find the container with id 8f9cfc5ad616d1b4b0ba1d8757fb0c4067f1be9a545b0dd335d2483777343359 Nov 24 12:56:02 crc kubenswrapper[4930]: I1124 12:56:02.690039 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" event={"ID":"1fdd139d-1f79-4827-b768-983c39235004","Type":"ContainerStarted","Data":"8f9cfc5ad616d1b4b0ba1d8757fb0c4067f1be9a545b0dd335d2483777343359"} Nov 24 12:56:03 crc kubenswrapper[4930]: I1124 12:56:03.700283 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-49g4x" podUID="7b8ec77e-fbb1-489d-942a-4dc17a894677" containerName="registry-server" containerID="cri-o://1259ded101a66a99a39b22ac57fdafe16d4268ccd98fb2edc76e69a240af0bc0" gracePeriod=2 Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.245481 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.345526 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8ec77e-fbb1-489d-942a-4dc17a894677-utilities\") pod \"7b8ec77e-fbb1-489d-942a-4dc17a894677\" (UID: \"7b8ec77e-fbb1-489d-942a-4dc17a894677\") " Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.345727 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8ec77e-fbb1-489d-942a-4dc17a894677-catalog-content\") pod \"7b8ec77e-fbb1-489d-942a-4dc17a894677\" (UID: \"7b8ec77e-fbb1-489d-942a-4dc17a894677\") " Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.345836 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xrdw\" (UniqueName: \"kubernetes.io/projected/7b8ec77e-fbb1-489d-942a-4dc17a894677-kube-api-access-4xrdw\") pod \"7b8ec77e-fbb1-489d-942a-4dc17a894677\" (UID: \"7b8ec77e-fbb1-489d-942a-4dc17a894677\") " Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.346413 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b8ec77e-fbb1-489d-942a-4dc17a894677-utilities" (OuterVolumeSpecName: "utilities") pod "7b8ec77e-fbb1-489d-942a-4dc17a894677" (UID: "7b8ec77e-fbb1-489d-942a-4dc17a894677"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.346878 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8ec77e-fbb1-489d-942a-4dc17a894677-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.356477 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b8ec77e-fbb1-489d-942a-4dc17a894677-kube-api-access-4xrdw" (OuterVolumeSpecName: "kube-api-access-4xrdw") pod "7b8ec77e-fbb1-489d-942a-4dc17a894677" (UID: "7b8ec77e-fbb1-489d-942a-4dc17a894677"). InnerVolumeSpecName "kube-api-access-4xrdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.379248 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b8ec77e-fbb1-489d-942a-4dc17a894677-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b8ec77e-fbb1-489d-942a-4dc17a894677" (UID: "7b8ec77e-fbb1-489d-942a-4dc17a894677"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.450783 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8ec77e-fbb1-489d-942a-4dc17a894677-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.450822 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xrdw\" (UniqueName: \"kubernetes.io/projected/7b8ec77e-fbb1-489d-942a-4dc17a894677-kube-api-access-4xrdw\") on node \"crc\" DevicePath \"\"" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.714847 4930 generic.go:334] "Generic (PLEG): container finished" podID="7b8ec77e-fbb1-489d-942a-4dc17a894677" containerID="1259ded101a66a99a39b22ac57fdafe16d4268ccd98fb2edc76e69a240af0bc0" exitCode=0 Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.715063 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49g4x" event={"ID":"7b8ec77e-fbb1-489d-942a-4dc17a894677","Type":"ContainerDied","Data":"1259ded101a66a99a39b22ac57fdafe16d4268ccd98fb2edc76e69a240af0bc0"} Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.715257 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49g4x" event={"ID":"7b8ec77e-fbb1-489d-942a-4dc17a894677","Type":"ContainerDied","Data":"5614d88708f8f59bfd16c73730965f12e450cd4543f0fac731b72e32ac9d996e"} Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.715278 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49g4x" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.715285 4930 scope.go:117] "RemoveContainer" containerID="1259ded101a66a99a39b22ac57fdafe16d4268ccd98fb2edc76e69a240af0bc0" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.747418 4930 scope.go:117] "RemoveContainer" containerID="3eca61dd04d5fd27a569eb82217f4add64ca73efa09d05da285116df29833184" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.760531 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-49g4x"] Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.769058 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-49g4x"] Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.793004 4930 scope.go:117] "RemoveContainer" containerID="38e8ab1be16c6ba2a17c93d9e891705599d172bd1560c03367632428ecb8db45" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.827335 4930 scope.go:117] "RemoveContainer" containerID="1259ded101a66a99a39b22ac57fdafe16d4268ccd98fb2edc76e69a240af0bc0" Nov 24 12:56:04 crc kubenswrapper[4930]: E1124 12:56:04.828153 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1259ded101a66a99a39b22ac57fdafe16d4268ccd98fb2edc76e69a240af0bc0\": container with ID starting with 1259ded101a66a99a39b22ac57fdafe16d4268ccd98fb2edc76e69a240af0bc0 not found: ID does not exist" containerID="1259ded101a66a99a39b22ac57fdafe16d4268ccd98fb2edc76e69a240af0bc0" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.828207 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1259ded101a66a99a39b22ac57fdafe16d4268ccd98fb2edc76e69a240af0bc0"} err="failed to get container status \"1259ded101a66a99a39b22ac57fdafe16d4268ccd98fb2edc76e69a240af0bc0\": rpc error: code = NotFound desc = could not find container 
\"1259ded101a66a99a39b22ac57fdafe16d4268ccd98fb2edc76e69a240af0bc0\": container with ID starting with 1259ded101a66a99a39b22ac57fdafe16d4268ccd98fb2edc76e69a240af0bc0 not found: ID does not exist" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.828246 4930 scope.go:117] "RemoveContainer" containerID="3eca61dd04d5fd27a569eb82217f4add64ca73efa09d05da285116df29833184" Nov 24 12:56:04 crc kubenswrapper[4930]: E1124 12:56:04.829420 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eca61dd04d5fd27a569eb82217f4add64ca73efa09d05da285116df29833184\": container with ID starting with 3eca61dd04d5fd27a569eb82217f4add64ca73efa09d05da285116df29833184 not found: ID does not exist" containerID="3eca61dd04d5fd27a569eb82217f4add64ca73efa09d05da285116df29833184" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.829470 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eca61dd04d5fd27a569eb82217f4add64ca73efa09d05da285116df29833184"} err="failed to get container status \"3eca61dd04d5fd27a569eb82217f4add64ca73efa09d05da285116df29833184\": rpc error: code = NotFound desc = could not find container \"3eca61dd04d5fd27a569eb82217f4add64ca73efa09d05da285116df29833184\": container with ID starting with 3eca61dd04d5fd27a569eb82217f4add64ca73efa09d05da285116df29833184 not found: ID does not exist" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.829505 4930 scope.go:117] "RemoveContainer" containerID="38e8ab1be16c6ba2a17c93d9e891705599d172bd1560c03367632428ecb8db45" Nov 24 12:56:04 crc kubenswrapper[4930]: E1124 12:56:04.830145 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38e8ab1be16c6ba2a17c93d9e891705599d172bd1560c03367632428ecb8db45\": container with ID starting with 38e8ab1be16c6ba2a17c93d9e891705599d172bd1560c03367632428ecb8db45 not found: ID does not exist" 
containerID="38e8ab1be16c6ba2a17c93d9e891705599d172bd1560c03367632428ecb8db45" Nov 24 12:56:04 crc kubenswrapper[4930]: I1124 12:56:04.830194 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38e8ab1be16c6ba2a17c93d9e891705599d172bd1560c03367632428ecb8db45"} err="failed to get container status \"38e8ab1be16c6ba2a17c93d9e891705599d172bd1560c03367632428ecb8db45\": rpc error: code = NotFound desc = could not find container \"38e8ab1be16c6ba2a17c93d9e891705599d172bd1560c03367632428ecb8db45\": container with ID starting with 38e8ab1be16c6ba2a17c93d9e891705599d172bd1560c03367632428ecb8db45 not found: ID does not exist" Nov 24 12:56:06 crc kubenswrapper[4930]: I1124 12:56:06.135440 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b8ec77e-fbb1-489d-942a-4dc17a894677" path="/var/lib/kubelet/pods/7b8ec77e-fbb1-489d-942a-4dc17a894677/volumes" Nov 24 12:56:13 crc kubenswrapper[4930]: I1124 12:56:13.834070 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" event={"ID":"1fdd139d-1f79-4827-b768-983c39235004","Type":"ContainerStarted","Data":"b3e51d2ef917b9c77a02ccdf8bb4f4e6378e9417a5ff12d120f4dfe265451bfc"} Nov 24 12:56:13 crc kubenswrapper[4930]: I1124 12:56:13.853461 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" podStartSLOduration=0.87340162 podStartE2EDuration="11.853443583s" podCreationTimestamp="2025-11-24 12:56:02 +0000 UTC" firstStartedPulling="2025-11-24 12:56:02.539111387 +0000 UTC m=+3409.153439337" lastFinishedPulling="2025-11-24 12:56:13.51915335 +0000 UTC m=+3420.133481300" observedRunningTime="2025-11-24 12:56:13.851180548 +0000 UTC m=+3420.465508498" watchObservedRunningTime="2025-11-24 12:56:13.853443583 +0000 UTC m=+3420.467771523" Nov 24 12:56:16 crc kubenswrapper[4930]: I1124 12:56:16.087909 4930 scope.go:117] "RemoveContainer" 
containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:56:16 crc kubenswrapper[4930]: E1124 12:56:16.088798 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:56:28 crc kubenswrapper[4930]: I1124 12:56:28.084764 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:56:28 crc kubenswrapper[4930]: E1124 12:56:28.085696 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 12:56:42 crc kubenswrapper[4930]: I1124 12:56:42.085990 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 12:56:43 crc kubenswrapper[4930]: I1124 12:56:43.091243 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"8d7b65a5712740c01ce1afcfac553a05814266846ca2f298cc77ddec359b6809"} Nov 24 12:56:58 crc kubenswrapper[4930]: I1124 12:56:58.310652 4930 generic.go:334] "Generic (PLEG): container finished" podID="1fdd139d-1f79-4827-b768-983c39235004" 
containerID="b3e51d2ef917b9c77a02ccdf8bb4f4e6378e9417a5ff12d120f4dfe265451bfc" exitCode=0 Nov 24 12:56:58 crc kubenswrapper[4930]: I1124 12:56:58.310747 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" event={"ID":"1fdd139d-1f79-4827-b768-983c39235004","Type":"ContainerDied","Data":"b3e51d2ef917b9c77a02ccdf8bb4f4e6378e9417a5ff12d120f4dfe265451bfc"} Nov 24 12:56:59 crc kubenswrapper[4930]: I1124 12:56:59.415594 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" Nov 24 12:56:59 crc kubenswrapper[4930]: I1124 12:56:59.455197 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9jcmb/crc-debug-cw2dd"] Nov 24 12:56:59 crc kubenswrapper[4930]: I1124 12:56:59.464428 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9jcmb/crc-debug-cw2dd"] Nov 24 12:56:59 crc kubenswrapper[4930]: I1124 12:56:59.516789 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fdd139d-1f79-4827-b768-983c39235004-host\") pod \"1fdd139d-1f79-4827-b768-983c39235004\" (UID: \"1fdd139d-1f79-4827-b768-983c39235004\") " Nov 24 12:56:59 crc kubenswrapper[4930]: I1124 12:56:59.516989 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fdd139d-1f79-4827-b768-983c39235004-host" (OuterVolumeSpecName: "host") pod "1fdd139d-1f79-4827-b768-983c39235004" (UID: "1fdd139d-1f79-4827-b768-983c39235004"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:56:59 crc kubenswrapper[4930]: I1124 12:56:59.516996 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb4gf\" (UniqueName: \"kubernetes.io/projected/1fdd139d-1f79-4827-b768-983c39235004-kube-api-access-jb4gf\") pod \"1fdd139d-1f79-4827-b768-983c39235004\" (UID: \"1fdd139d-1f79-4827-b768-983c39235004\") " Nov 24 12:56:59 crc kubenswrapper[4930]: I1124 12:56:59.517610 4930 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fdd139d-1f79-4827-b768-983c39235004-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:56:59 crc kubenswrapper[4930]: I1124 12:56:59.528788 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fdd139d-1f79-4827-b768-983c39235004-kube-api-access-jb4gf" (OuterVolumeSpecName: "kube-api-access-jb4gf") pod "1fdd139d-1f79-4827-b768-983c39235004" (UID: "1fdd139d-1f79-4827-b768-983c39235004"). InnerVolumeSpecName "kube-api-access-jb4gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:56:59 crc kubenswrapper[4930]: I1124 12:56:59.618948 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb4gf\" (UniqueName: \"kubernetes.io/projected/1fdd139d-1f79-4827-b768-983c39235004-kube-api-access-jb4gf\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.095991 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fdd139d-1f79-4827-b768-983c39235004" path="/var/lib/kubelet/pods/1fdd139d-1f79-4827-b768-983c39235004/volumes" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.330836 4930 scope.go:117] "RemoveContainer" containerID="b3e51d2ef917b9c77a02ccdf8bb4f4e6378e9417a5ff12d120f4dfe265451bfc" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.330890 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jcmb/crc-debug-cw2dd" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.639773 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9jcmb/crc-debug-qbkzf"] Nov 24 12:57:00 crc kubenswrapper[4930]: E1124 12:57:00.640441 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8ec77e-fbb1-489d-942a-4dc17a894677" containerName="extract-content" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.640466 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8ec77e-fbb1-489d-942a-4dc17a894677" containerName="extract-content" Nov 24 12:57:00 crc kubenswrapper[4930]: E1124 12:57:00.640522 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fdd139d-1f79-4827-b768-983c39235004" containerName="container-00" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.640534 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fdd139d-1f79-4827-b768-983c39235004" containerName="container-00" Nov 24 12:57:00 crc kubenswrapper[4930]: E1124 12:57:00.640585 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8ec77e-fbb1-489d-942a-4dc17a894677" containerName="extract-utilities" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.640598 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8ec77e-fbb1-489d-942a-4dc17a894677" containerName="extract-utilities" Nov 24 12:57:00 crc kubenswrapper[4930]: E1124 12:57:00.640618 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8ec77e-fbb1-489d-942a-4dc17a894677" containerName="registry-server" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.640629 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8ec77e-fbb1-489d-942a-4dc17a894677" containerName="registry-server" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.641259 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fdd139d-1f79-4827-b768-983c39235004" 
containerName="container-00" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.641327 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8ec77e-fbb1-489d-942a-4dc17a894677" containerName="registry-server" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.642231 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jcmb/crc-debug-qbkzf" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.739403 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57b28d68-9ed7-471d-aff7-f2575d17d43b-host\") pod \"crc-debug-qbkzf\" (UID: \"57b28d68-9ed7-471d-aff7-f2575d17d43b\") " pod="openshift-must-gather-9jcmb/crc-debug-qbkzf" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.739682 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5plp\" (UniqueName: \"kubernetes.io/projected/57b28d68-9ed7-471d-aff7-f2575d17d43b-kube-api-access-j5plp\") pod \"crc-debug-qbkzf\" (UID: \"57b28d68-9ed7-471d-aff7-f2575d17d43b\") " pod="openshift-must-gather-9jcmb/crc-debug-qbkzf" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.841766 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57b28d68-9ed7-471d-aff7-f2575d17d43b-host\") pod \"crc-debug-qbkzf\" (UID: \"57b28d68-9ed7-471d-aff7-f2575d17d43b\") " pod="openshift-must-gather-9jcmb/crc-debug-qbkzf" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.841846 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5plp\" (UniqueName: \"kubernetes.io/projected/57b28d68-9ed7-471d-aff7-f2575d17d43b-kube-api-access-j5plp\") pod \"crc-debug-qbkzf\" (UID: \"57b28d68-9ed7-471d-aff7-f2575d17d43b\") " pod="openshift-must-gather-9jcmb/crc-debug-qbkzf" Nov 24 12:57:00 crc 
kubenswrapper[4930]: I1124 12:57:00.841976 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57b28d68-9ed7-471d-aff7-f2575d17d43b-host\") pod \"crc-debug-qbkzf\" (UID: \"57b28d68-9ed7-471d-aff7-f2575d17d43b\") " pod="openshift-must-gather-9jcmb/crc-debug-qbkzf" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.863609 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5plp\" (UniqueName: \"kubernetes.io/projected/57b28d68-9ed7-471d-aff7-f2575d17d43b-kube-api-access-j5plp\") pod \"crc-debug-qbkzf\" (UID: \"57b28d68-9ed7-471d-aff7-f2575d17d43b\") " pod="openshift-must-gather-9jcmb/crc-debug-qbkzf" Nov 24 12:57:00 crc kubenswrapper[4930]: I1124 12:57:00.964750 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jcmb/crc-debug-qbkzf" Nov 24 12:57:01 crc kubenswrapper[4930]: I1124 12:57:01.346086 4930 generic.go:334] "Generic (PLEG): container finished" podID="57b28d68-9ed7-471d-aff7-f2575d17d43b" containerID="9f29227a5d366b30d251f9f7e1367b7edcf9ed009be45d818918db7a525232ac" exitCode=0 Nov 24 12:57:01 crc kubenswrapper[4930]: I1124 12:57:01.346345 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jcmb/crc-debug-qbkzf" event={"ID":"57b28d68-9ed7-471d-aff7-f2575d17d43b","Type":"ContainerDied","Data":"9f29227a5d366b30d251f9f7e1367b7edcf9ed009be45d818918db7a525232ac"} Nov 24 12:57:01 crc kubenswrapper[4930]: I1124 12:57:01.346490 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jcmb/crc-debug-qbkzf" event={"ID":"57b28d68-9ed7-471d-aff7-f2575d17d43b","Type":"ContainerStarted","Data":"fd954641075a98814e30df4fcd233f7b49c1dbd810c46cc5293769bf712ef9fe"} Nov 24 12:57:01 crc kubenswrapper[4930]: I1124 12:57:01.816948 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9jcmb/crc-debug-qbkzf"] Nov 24 12:57:01 crc 
kubenswrapper[4930]: I1124 12:57:01.824881 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9jcmb/crc-debug-qbkzf"] Nov 24 12:57:02 crc kubenswrapper[4930]: I1124 12:57:02.449194 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jcmb/crc-debug-qbkzf" Nov 24 12:57:02 crc kubenswrapper[4930]: I1124 12:57:02.569980 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5plp\" (UniqueName: \"kubernetes.io/projected/57b28d68-9ed7-471d-aff7-f2575d17d43b-kube-api-access-j5plp\") pod \"57b28d68-9ed7-471d-aff7-f2575d17d43b\" (UID: \"57b28d68-9ed7-471d-aff7-f2575d17d43b\") " Nov 24 12:57:02 crc kubenswrapper[4930]: I1124 12:57:02.570243 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57b28d68-9ed7-471d-aff7-f2575d17d43b-host\") pod \"57b28d68-9ed7-471d-aff7-f2575d17d43b\" (UID: \"57b28d68-9ed7-471d-aff7-f2575d17d43b\") " Nov 24 12:57:02 crc kubenswrapper[4930]: I1124 12:57:02.570293 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57b28d68-9ed7-471d-aff7-f2575d17d43b-host" (OuterVolumeSpecName: "host") pod "57b28d68-9ed7-471d-aff7-f2575d17d43b" (UID: "57b28d68-9ed7-471d-aff7-f2575d17d43b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:57:02 crc kubenswrapper[4930]: I1124 12:57:02.570680 4930 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57b28d68-9ed7-471d-aff7-f2575d17d43b-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:02 crc kubenswrapper[4930]: I1124 12:57:02.575993 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b28d68-9ed7-471d-aff7-f2575d17d43b-kube-api-access-j5plp" (OuterVolumeSpecName: "kube-api-access-j5plp") pod "57b28d68-9ed7-471d-aff7-f2575d17d43b" (UID: "57b28d68-9ed7-471d-aff7-f2575d17d43b"). InnerVolumeSpecName "kube-api-access-j5plp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:57:02 crc kubenswrapper[4930]: I1124 12:57:02.672456 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5plp\" (UniqueName: \"kubernetes.io/projected/57b28d68-9ed7-471d-aff7-f2575d17d43b-kube-api-access-j5plp\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:02 crc kubenswrapper[4930]: I1124 12:57:02.991681 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9jcmb/crc-debug-k8rbh"] Nov 24 12:57:02 crc kubenswrapper[4930]: E1124 12:57:02.992182 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b28d68-9ed7-471d-aff7-f2575d17d43b" containerName="container-00" Nov 24 12:57:02 crc kubenswrapper[4930]: I1124 12:57:02.992197 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b28d68-9ed7-471d-aff7-f2575d17d43b" containerName="container-00" Nov 24 12:57:02 crc kubenswrapper[4930]: I1124 12:57:02.992429 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b28d68-9ed7-471d-aff7-f2575d17d43b" containerName="container-00" Nov 24 12:57:02 crc kubenswrapper[4930]: I1124 12:57:02.993179 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jcmb/crc-debug-k8rbh" Nov 24 12:57:03 crc kubenswrapper[4930]: I1124 12:57:03.080987 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td956\" (UniqueName: \"kubernetes.io/projected/dd5c90aa-b97b-4839-8829-a94d86fddc8e-kube-api-access-td956\") pod \"crc-debug-k8rbh\" (UID: \"dd5c90aa-b97b-4839-8829-a94d86fddc8e\") " pod="openshift-must-gather-9jcmb/crc-debug-k8rbh" Nov 24 12:57:03 crc kubenswrapper[4930]: I1124 12:57:03.081402 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5c90aa-b97b-4839-8829-a94d86fddc8e-host\") pod \"crc-debug-k8rbh\" (UID: \"dd5c90aa-b97b-4839-8829-a94d86fddc8e\") " pod="openshift-must-gather-9jcmb/crc-debug-k8rbh" Nov 24 12:57:03 crc kubenswrapper[4930]: I1124 12:57:03.183405 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5c90aa-b97b-4839-8829-a94d86fddc8e-host\") pod \"crc-debug-k8rbh\" (UID: \"dd5c90aa-b97b-4839-8829-a94d86fddc8e\") " pod="openshift-must-gather-9jcmb/crc-debug-k8rbh" Nov 24 12:57:03 crc kubenswrapper[4930]: I1124 12:57:03.183790 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td956\" (UniqueName: \"kubernetes.io/projected/dd5c90aa-b97b-4839-8829-a94d86fddc8e-kube-api-access-td956\") pod \"crc-debug-k8rbh\" (UID: \"dd5c90aa-b97b-4839-8829-a94d86fddc8e\") " pod="openshift-must-gather-9jcmb/crc-debug-k8rbh" Nov 24 12:57:03 crc kubenswrapper[4930]: I1124 12:57:03.183553 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5c90aa-b97b-4839-8829-a94d86fddc8e-host\") pod \"crc-debug-k8rbh\" (UID: \"dd5c90aa-b97b-4839-8829-a94d86fddc8e\") " pod="openshift-must-gather-9jcmb/crc-debug-k8rbh" Nov 24 12:57:03 crc 
kubenswrapper[4930]: I1124 12:57:03.205277 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td956\" (UniqueName: \"kubernetes.io/projected/dd5c90aa-b97b-4839-8829-a94d86fddc8e-kube-api-access-td956\") pod \"crc-debug-k8rbh\" (UID: \"dd5c90aa-b97b-4839-8829-a94d86fddc8e\") " pod="openshift-must-gather-9jcmb/crc-debug-k8rbh" Nov 24 12:57:03 crc kubenswrapper[4930]: I1124 12:57:03.311682 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jcmb/crc-debug-k8rbh" Nov 24 12:57:03 crc kubenswrapper[4930]: W1124 12:57:03.345814 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd5c90aa_b97b_4839_8829_a94d86fddc8e.slice/crio-9a062fdc88ddec443f4549801d49035e848705e05c83d5f60b10035f7c12177e WatchSource:0}: Error finding container 9a062fdc88ddec443f4549801d49035e848705e05c83d5f60b10035f7c12177e: Status 404 returned error can't find the container with id 9a062fdc88ddec443f4549801d49035e848705e05c83d5f60b10035f7c12177e Nov 24 12:57:03 crc kubenswrapper[4930]: I1124 12:57:03.370168 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jcmb/crc-debug-k8rbh" event={"ID":"dd5c90aa-b97b-4839-8829-a94d86fddc8e","Type":"ContainerStarted","Data":"9a062fdc88ddec443f4549801d49035e848705e05c83d5f60b10035f7c12177e"} Nov 24 12:57:03 crc kubenswrapper[4930]: I1124 12:57:03.372190 4930 scope.go:117] "RemoveContainer" containerID="9f29227a5d366b30d251f9f7e1367b7edcf9ed009be45d818918db7a525232ac" Nov 24 12:57:03 crc kubenswrapper[4930]: I1124 12:57:03.372339 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jcmb/crc-debug-qbkzf" Nov 24 12:57:04 crc kubenswrapper[4930]: I1124 12:57:04.094593 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57b28d68-9ed7-471d-aff7-f2575d17d43b" path="/var/lib/kubelet/pods/57b28d68-9ed7-471d-aff7-f2575d17d43b/volumes" Nov 24 12:57:04 crc kubenswrapper[4930]: I1124 12:57:04.388971 4930 generic.go:334] "Generic (PLEG): container finished" podID="dd5c90aa-b97b-4839-8829-a94d86fddc8e" containerID="cb4a7d010c6c7e07d13c5c1aa88159899e36f9d118897accedb674a3bcdd74a5" exitCode=0 Nov 24 12:57:04 crc kubenswrapper[4930]: I1124 12:57:04.389043 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jcmb/crc-debug-k8rbh" event={"ID":"dd5c90aa-b97b-4839-8829-a94d86fddc8e","Type":"ContainerDied","Data":"cb4a7d010c6c7e07d13c5c1aa88159899e36f9d118897accedb674a3bcdd74a5"} Nov 24 12:57:04 crc kubenswrapper[4930]: I1124 12:57:04.431514 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9jcmb/crc-debug-k8rbh"] Nov 24 12:57:04 crc kubenswrapper[4930]: I1124 12:57:04.438795 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9jcmb/crc-debug-k8rbh"] Nov 24 12:57:05 crc kubenswrapper[4930]: I1124 12:57:05.510555 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jcmb/crc-debug-k8rbh" Nov 24 12:57:05 crc kubenswrapper[4930]: I1124 12:57:05.625238 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5c90aa-b97b-4839-8829-a94d86fddc8e-host\") pod \"dd5c90aa-b97b-4839-8829-a94d86fddc8e\" (UID: \"dd5c90aa-b97b-4839-8829-a94d86fddc8e\") " Nov 24 12:57:05 crc kubenswrapper[4930]: I1124 12:57:05.625347 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td956\" (UniqueName: \"kubernetes.io/projected/dd5c90aa-b97b-4839-8829-a94d86fddc8e-kube-api-access-td956\") pod \"dd5c90aa-b97b-4839-8829-a94d86fddc8e\" (UID: \"dd5c90aa-b97b-4839-8829-a94d86fddc8e\") " Nov 24 12:57:05 crc kubenswrapper[4930]: I1124 12:57:05.625376 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd5c90aa-b97b-4839-8829-a94d86fddc8e-host" (OuterVolumeSpecName: "host") pod "dd5c90aa-b97b-4839-8829-a94d86fddc8e" (UID: "dd5c90aa-b97b-4839-8829-a94d86fddc8e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:57:05 crc kubenswrapper[4930]: I1124 12:57:05.631336 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd5c90aa-b97b-4839-8829-a94d86fddc8e-kube-api-access-td956" (OuterVolumeSpecName: "kube-api-access-td956") pod "dd5c90aa-b97b-4839-8829-a94d86fddc8e" (UID: "dd5c90aa-b97b-4839-8829-a94d86fddc8e"). InnerVolumeSpecName "kube-api-access-td956". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:57:05 crc kubenswrapper[4930]: I1124 12:57:05.728156 4930 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5c90aa-b97b-4839-8829-a94d86fddc8e-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:05 crc kubenswrapper[4930]: I1124 12:57:05.728203 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td956\" (UniqueName: \"kubernetes.io/projected/dd5c90aa-b97b-4839-8829-a94d86fddc8e-kube-api-access-td956\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:06 crc kubenswrapper[4930]: I1124 12:57:06.094496 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd5c90aa-b97b-4839-8829-a94d86fddc8e" path="/var/lib/kubelet/pods/dd5c90aa-b97b-4839-8829-a94d86fddc8e/volumes" Nov 24 12:57:06 crc kubenswrapper[4930]: I1124 12:57:06.412390 4930 scope.go:117] "RemoveContainer" containerID="cb4a7d010c6c7e07d13c5c1aa88159899e36f9d118897accedb674a3bcdd74a5" Nov 24 12:57:06 crc kubenswrapper[4930]: I1124 12:57:06.412445 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jcmb/crc-debug-k8rbh" Nov 24 12:57:21 crc kubenswrapper[4930]: I1124 12:57:21.877022 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-84d7bcd766-9sdc2_513243cf-0c25-46b1-a535-906324dca4bb/barbican-api/0.log" Nov 24 12:57:21 crc kubenswrapper[4930]: I1124 12:57:21.926269 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-84d7bcd766-9sdc2_513243cf-0c25-46b1-a535-906324dca4bb/barbican-api-log/0.log" Nov 24 12:57:22 crc kubenswrapper[4930]: I1124 12:57:22.096802 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86bf5c4cf6-tbptj_b24c3d9b-ee6d-47ef-9391-91a395edbfbd/barbican-keystone-listener/0.log" Nov 24 12:57:22 crc kubenswrapper[4930]: I1124 12:57:22.144593 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86bf5c4cf6-tbptj_b24c3d9b-ee6d-47ef-9391-91a395edbfbd/barbican-keystone-listener-log/0.log" Nov 24 12:57:22 crc kubenswrapper[4930]: I1124 12:57:22.275036 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b86dc847c-csn2f_4a583517-6311-464a-b855-2a2d1e788461/barbican-worker/0.log" Nov 24 12:57:22 crc kubenswrapper[4930]: I1124 12:57:22.348764 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b86dc847c-csn2f_4a583517-6311-464a-b855-2a2d1e788461/barbican-worker-log/0.log" Nov 24 12:57:22 crc kubenswrapper[4930]: I1124 12:57:22.490850 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l_c3f7af8b-b5d0-4361-ada0-42f01955a7d5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:22 crc kubenswrapper[4930]: I1124 12:57:22.595946 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5163ee34-cf81-4983-a359-1224b73676fe/ceilometer-central-agent/0.log" Nov 
24 12:57:22 crc kubenswrapper[4930]: I1124 12:57:22.665896 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5163ee34-cf81-4983-a359-1224b73676fe/ceilometer-notification-agent/0.log" Nov 24 12:57:22 crc kubenswrapper[4930]: I1124 12:57:22.748570 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5163ee34-cf81-4983-a359-1224b73676fe/proxy-httpd/0.log" Nov 24 12:57:22 crc kubenswrapper[4930]: I1124 12:57:22.784389 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5163ee34-cf81-4983-a359-1224b73676fe/sg-core/0.log" Nov 24 12:57:23 crc kubenswrapper[4930]: I1124 12:57:23.018688 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a527a579-00ed-4438-b675-70c5baefb0d9/cinder-api/0.log" Nov 24 12:57:23 crc kubenswrapper[4930]: I1124 12:57:23.067263 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a527a579-00ed-4438-b675-70c5baefb0d9/cinder-api-log/0.log" Nov 24 12:57:23 crc kubenswrapper[4930]: I1124 12:57:23.210909 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f4f3ac20-aa87-48a4-9980-08b8ca2053ef/cinder-scheduler/0.log" Nov 24 12:57:23 crc kubenswrapper[4930]: I1124 12:57:23.254817 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f4f3ac20-aa87-48a4-9980-08b8ca2053ef/probe/0.log" Nov 24 12:57:23 crc kubenswrapper[4930]: I1124 12:57:23.419916 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj_2e059ba1-d1de-4764-afd1-50b78af12ce8/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:23 crc kubenswrapper[4930]: I1124 12:57:23.587133 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj_7dab908a-df78-4c5a-945f-25221b75df7a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:23 crc kubenswrapper[4930]: I1124 12:57:23.652928 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64858ddbd7-fd6z9_9773394a-0a7d-40f6-a556-d3feb5acaf9d/init/0.log" Nov 24 12:57:23 crc kubenswrapper[4930]: I1124 12:57:23.920450 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64858ddbd7-fd6z9_9773394a-0a7d-40f6-a556-d3feb5acaf9d/init/0.log" Nov 24 12:57:23 crc kubenswrapper[4930]: I1124 12:57:23.968362 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt_94e8669b-69a8-41fb-ab05-d2e913495e16/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:24 crc kubenswrapper[4930]: I1124 12:57:24.000903 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64858ddbd7-fd6z9_9773394a-0a7d-40f6-a556-d3feb5acaf9d/dnsmasq-dns/0.log" Nov 24 12:57:24 crc kubenswrapper[4930]: I1124 12:57:24.191408 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_368b80c7-cc7d-4d6a-8b4d-90ea32596bf9/glance-log/0.log" Nov 24 12:57:24 crc kubenswrapper[4930]: I1124 12:57:24.219808 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_368b80c7-cc7d-4d6a-8b4d-90ea32596bf9/glance-httpd/0.log" Nov 24 12:57:24 crc kubenswrapper[4930]: I1124 12:57:24.656814 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3/glance-httpd/0.log" Nov 24 12:57:24 crc kubenswrapper[4930]: I1124 12:57:24.658305 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3/glance-log/0.log" Nov 24 12:57:24 crc kubenswrapper[4930]: I1124 12:57:24.882668 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b7594b454-4gfnw_8851e459-770d-4a08-8b35-41e3e060608b/horizon/0.log" Nov 24 12:57:25 crc kubenswrapper[4930]: I1124 12:57:25.019218 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w_dbe1f36a-7423-4635-bc7e-7ad5ba208b8b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:25 crc kubenswrapper[4930]: I1124 12:57:25.227479 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-xrq57_3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:25 crc kubenswrapper[4930]: I1124 12:57:25.304048 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b7594b454-4gfnw_8851e459-770d-4a08-8b35-41e3e060608b/horizon-log/0.log" Nov 24 12:57:25 crc kubenswrapper[4930]: I1124 12:57:25.607856 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_43eb3b2e-759d-46b8-885a-222b5d97e1c6/kube-state-metrics/0.log" Nov 24 12:57:25 crc kubenswrapper[4930]: I1124 12:57:25.641226 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7d65b7d547-xbx74_cddd20a0-4ab1-4747-86ec-3dbd6ae06f74/keystone-api/0.log" Nov 24 12:57:25 crc kubenswrapper[4930]: I1124 12:57:25.916093 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7_e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:26 crc kubenswrapper[4930]: I1124 12:57:26.245115 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-785757c67f-sl8rq_c3722de2-f333-4130-97bb-d2377fc9052f/neutron-httpd/0.log" Nov 24 12:57:26 crc kubenswrapper[4930]: I1124 12:57:26.320752 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-785757c67f-sl8rq_c3722de2-f333-4130-97bb-d2377fc9052f/neutron-api/0.log" Nov 24 12:57:26 crc kubenswrapper[4930]: I1124 12:57:26.367644 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc_2601017f-22e2-4b92-a224-ea216464d20a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:26 crc kubenswrapper[4930]: I1124 12:57:26.970217 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09/nova-cell0-conductor-conductor/0.log" Nov 24 12:57:27 crc kubenswrapper[4930]: I1124 12:57:27.021924 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ae96b7cf-94c8-4f24-bc63-3b0a529f09e5/nova-api-log/0.log" Nov 24 12:57:27 crc kubenswrapper[4930]: I1124 12:57:27.290218 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_cd764c7d-ba7d-4a99-8988-863d9cd6ad03/nova-cell1-conductor-conductor/0.log" Nov 24 12:57:27 crc kubenswrapper[4930]: I1124 12:57:27.348501 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ae96b7cf-94c8-4f24-bc63-3b0a529f09e5/nova-api-api/0.log" Nov 24 12:57:27 crc kubenswrapper[4930]: I1124 12:57:27.388261 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8d796659-c1c3-48aa-94eb-e16a14f8a0c8/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 12:57:27 crc kubenswrapper[4930]: I1124 12:57:27.617312 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-x7cw9_b5e86381-1bbe-4708-a86f-da5db51c1fb7/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:27 crc kubenswrapper[4930]: I1124 12:57:27.756182 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5758b132-d70a-4597-87b7-f172d1e8560a/nova-metadata-log/0.log" Nov 24 12:57:28 crc kubenswrapper[4930]: I1124 12:57:28.052584 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7ec8562f-0cac-4105-9a8e-ba98bf34a944/nova-scheduler-scheduler/0.log" Nov 24 12:57:28 crc kubenswrapper[4930]: I1124 12:57:28.201625 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_64612891-0a55-4622-8888-d141a949c665/mysql-bootstrap/0.log" Nov 24 12:57:28 crc kubenswrapper[4930]: I1124 12:57:28.399474 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_64612891-0a55-4622-8888-d141a949c665/galera/0.log" Nov 24 12:57:28 crc kubenswrapper[4930]: I1124 12:57:28.486167 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_64612891-0a55-4622-8888-d141a949c665/mysql-bootstrap/0.log" Nov 24 12:57:28 crc kubenswrapper[4930]: I1124 12:57:28.661291 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bddca103-daee-4f61-9165-1f6ec4762bd1/mysql-bootstrap/0.log" Nov 24 12:57:28 crc kubenswrapper[4930]: I1124 12:57:28.854579 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bddca103-daee-4f61-9165-1f6ec4762bd1/mysql-bootstrap/0.log" Nov 24 12:57:28 crc kubenswrapper[4930]: I1124 12:57:28.941914 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bddca103-daee-4f61-9165-1f6ec4762bd1/galera/0.log" Nov 24 12:57:29 crc kubenswrapper[4930]: I1124 12:57:29.002464 4930 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-metadata-0_5758b132-d70a-4597-87b7-f172d1e8560a/nova-metadata-metadata/0.log" Nov 24 12:57:29 crc kubenswrapper[4930]: I1124 12:57:29.106495 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1416edd0-b4e2-4acb-a449-1e9d40e9b2f5/openstackclient/0.log" Nov 24 12:57:29 crc kubenswrapper[4930]: I1124 12:57:29.211262 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fnxs8_b4686e3a-6cd1-4ada-a593-a7cfa2598257/openstack-network-exporter/0.log" Nov 24 12:57:29 crc kubenswrapper[4930]: I1124 12:57:29.379512 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q5rmd_47adcfa9-c402-4f40-b558-bb2a56d93293/ovsdb-server-init/0.log" Nov 24 12:57:29 crc kubenswrapper[4930]: I1124 12:57:29.575065 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q5rmd_47adcfa9-c402-4f40-b558-bb2a56d93293/ovs-vswitchd/0.log" Nov 24 12:57:29 crc kubenswrapper[4930]: I1124 12:57:29.607967 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q5rmd_47adcfa9-c402-4f40-b558-bb2a56d93293/ovsdb-server/0.log" Nov 24 12:57:29 crc kubenswrapper[4930]: I1124 12:57:29.651075 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q5rmd_47adcfa9-c402-4f40-b558-bb2a56d93293/ovsdb-server-init/0.log" Nov 24 12:57:29 crc kubenswrapper[4930]: I1124 12:57:29.847613 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-r7nwq_ce96cb2b-064b-4d76-a101-df9f31c86314/ovn-controller/0.log" Nov 24 12:57:29 crc kubenswrapper[4930]: I1124 12:57:29.903161 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-t562k_48d052f4-e44f-45e2-856a-08346f84f5b8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:30 crc kubenswrapper[4930]: I1124 
12:57:30.103268 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1/openstack-network-exporter/0.log" Nov 24 12:57:30 crc kubenswrapper[4930]: I1124 12:57:30.114336 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1/ovn-northd/0.log" Nov 24 12:57:30 crc kubenswrapper[4930]: I1124 12:57:30.341590 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3118a4f6-bfb6-4646-a543-2f2dcbf03681/openstack-network-exporter/0.log" Nov 24 12:57:30 crc kubenswrapper[4930]: I1124 12:57:30.392624 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3118a4f6-bfb6-4646-a543-2f2dcbf03681/ovsdbserver-nb/0.log" Nov 24 12:57:30 crc kubenswrapper[4930]: I1124 12:57:30.520052 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_abae5d96-d4bd-42db-8517-ac6defbb22f2/openstack-network-exporter/0.log" Nov 24 12:57:30 crc kubenswrapper[4930]: I1124 12:57:30.598165 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_abae5d96-d4bd-42db-8517-ac6defbb22f2/ovsdbserver-sb/0.log" Nov 24 12:57:30 crc kubenswrapper[4930]: I1124 12:57:30.750460 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-784c754f4d-ttmj6_bb758c76-2ee4-4bac-8a07-d44205706854/placement-api/0.log" Nov 24 12:57:30 crc kubenswrapper[4930]: I1124 12:57:30.877421 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-784c754f4d-ttmj6_bb758c76-2ee4-4bac-8a07-d44205706854/placement-log/0.log" Nov 24 12:57:31 crc kubenswrapper[4930]: I1124 12:57:31.050486 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2247968a-aee9-4461-afd9-cfb36cc1f6fd/setup-container/0.log" Nov 24 12:57:31 crc kubenswrapper[4930]: I1124 12:57:31.211172 4930 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2247968a-aee9-4461-afd9-cfb36cc1f6fd/rabbitmq/0.log" Nov 24 12:57:31 crc kubenswrapper[4930]: I1124 12:57:31.289087 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2247968a-aee9-4461-afd9-cfb36cc1f6fd/setup-container/0.log" Nov 24 12:57:31 crc kubenswrapper[4930]: I1124 12:57:31.363195 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a5fe79a3-de03-466f-bf55-2d8c8259895a/setup-container/0.log" Nov 24 12:57:31 crc kubenswrapper[4930]: I1124 12:57:31.643630 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a5fe79a3-de03-466f-bf55-2d8c8259895a/rabbitmq/0.log" Nov 24 12:57:31 crc kubenswrapper[4930]: I1124 12:57:31.644321 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-84f87_05ff1b01-0d59-4a45-9683-41ae2e8163bc/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:31 crc kubenswrapper[4930]: I1124 12:57:31.691885 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a5fe79a3-de03-466f-bf55-2d8c8259895a/setup-container/0.log" Nov 24 12:57:31 crc kubenswrapper[4930]: I1124 12:57:31.914161 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-gcq7j_7b4b0309-31fd-407f-a03f-df928fd4675b/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:31 crc kubenswrapper[4930]: I1124 12:57:31.962663 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s_29211cc5-c7d0-4aa9-9456-3313e20d2e1d/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:32 crc kubenswrapper[4930]: I1124 12:57:32.167053 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gzmr4_2454068c-7c38-4a67-8830-63a6b0add307/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:32 crc kubenswrapper[4930]: I1124 12:57:32.202634 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gglqx_fea938c9-2678-4985-bbe3-8f15d9a3302b/ssh-known-hosts-edpm-deployment/0.log" Nov 24 12:57:32 crc kubenswrapper[4930]: I1124 12:57:32.575128 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f4c64f46c-fdhkr_7544a665-a649-46c1-b2e2-4f0179645890/proxy-httpd/0.log" Nov 24 12:57:32 crc kubenswrapper[4930]: I1124 12:57:32.585719 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f4c64f46c-fdhkr_7544a665-a649-46c1-b2e2-4f0179645890/proxy-server/0.log" Nov 24 12:57:32 crc kubenswrapper[4930]: I1124 12:57:32.734252 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2gmcp_066844af-3950-4700-84c4-3c1043ad05e7/swift-ring-rebalance/0.log" Nov 24 12:57:32 crc kubenswrapper[4930]: I1124 12:57:32.889933 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/account-auditor/0.log" Nov 24 12:57:32 crc kubenswrapper[4930]: I1124 12:57:32.910508 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/account-reaper/0.log" Nov 24 12:57:33 crc kubenswrapper[4930]: I1124 12:57:33.035528 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/account-replicator/0.log" Nov 24 12:57:33 crc kubenswrapper[4930]: I1124 12:57:33.101148 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/account-server/0.log" Nov 24 12:57:33 crc kubenswrapper[4930]: I1124 
12:57:33.144188 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/container-auditor/0.log" Nov 24 12:57:33 crc kubenswrapper[4930]: I1124 12:57:33.236578 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/container-replicator/0.log" Nov 24 12:57:33 crc kubenswrapper[4930]: I1124 12:57:33.340225 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/container-server/0.log" Nov 24 12:57:33 crc kubenswrapper[4930]: I1124 12:57:33.367113 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/container-updater/0.log" Nov 24 12:57:33 crc kubenswrapper[4930]: I1124 12:57:33.433934 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/object-auditor/0.log" Nov 24 12:57:33 crc kubenswrapper[4930]: I1124 12:57:33.540650 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/object-expirer/0.log" Nov 24 12:57:33 crc kubenswrapper[4930]: I1124 12:57:33.621496 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/object-server/0.log" Nov 24 12:57:33 crc kubenswrapper[4930]: I1124 12:57:33.682878 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/object-replicator/0.log" Nov 24 12:57:33 crc kubenswrapper[4930]: I1124 12:57:33.748584 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/object-updater/0.log" Nov 24 12:57:33 crc kubenswrapper[4930]: I1124 12:57:33.813575 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/rsync/0.log" Nov 24 12:57:33 crc kubenswrapper[4930]: I1124 12:57:33.895837 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/swift-recon-cron/0.log" Nov 24 12:57:34 crc kubenswrapper[4930]: I1124 12:57:34.076728 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6_e5f020e4-dece-42e7-b327-99797d3b447f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:34 crc kubenswrapper[4930]: I1124 12:57:34.146192 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6a7fbabe-a7e2-469c-b6aa-22973dd510b3/tempest-tests-tempest-tests-runner/0.log" Nov 24 12:57:34 crc kubenswrapper[4930]: I1124 12:57:34.364258 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7ecdd72c-294a-43fa-bd7a-edf2e10447fd/test-operator-logs-container/0.log" Nov 24 12:57:34 crc kubenswrapper[4930]: I1124 12:57:34.519065 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv_4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:57:42 crc kubenswrapper[4930]: I1124 12:57:42.617080 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa/memcached/0.log" Nov 24 12:58:03 crc kubenswrapper[4930]: I1124 12:58:03.064505 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/util/0.log" Nov 24 12:58:03 crc kubenswrapper[4930]: I1124 12:58:03.264249 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/pull/0.log" Nov 24 12:58:03 crc kubenswrapper[4930]: I1124 12:58:03.267863 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/util/0.log" Nov 24 12:58:03 crc kubenswrapper[4930]: I1124 12:58:03.275758 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/pull/0.log" Nov 24 12:58:03 crc kubenswrapper[4930]: I1124 12:58:03.576442 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/util/0.log" Nov 24 12:58:03 crc kubenswrapper[4930]: I1124 12:58:03.577100 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/pull/0.log" Nov 24 12:58:03 crc kubenswrapper[4930]: I1124 12:58:03.591281 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/extract/0.log" Nov 24 12:58:03 crc kubenswrapper[4930]: I1124 12:58:03.769344 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7768f8c84f-wqr7x_0752fe04-d0ea-4225-8e86-62c70618a5a1/kube-rbac-proxy/0.log" Nov 24 12:58:03 crc kubenswrapper[4930]: I1124 12:58:03.829656 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7768f8c84f-wqr7x_0752fe04-d0ea-4225-8e86-62c70618a5a1/manager/0.log" Nov 24 12:58:03 crc 
kubenswrapper[4930]: I1124 12:58:03.876450 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6d8fd67bf7-56s9w_cf778eca-e1fc-4619-9a85-aeda0fac014b/kube-rbac-proxy/0.log" Nov 24 12:58:04 crc kubenswrapper[4930]: I1124 12:58:04.039269 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6d8fd67bf7-56s9w_cf778eca-e1fc-4619-9a85-aeda0fac014b/manager/0.log" Nov 24 12:58:04 crc kubenswrapper[4930]: I1124 12:58:04.076712 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-56dfb6b67f-wn7d4_96a9f3c5-4eaa-4265-9c3e-f0c54dd0df0b/kube-rbac-proxy/0.log" Nov 24 12:58:04 crc kubenswrapper[4930]: I1124 12:58:04.096758 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-56dfb6b67f-wn7d4_96a9f3c5-4eaa-4265-9c3e-f0c54dd0df0b/manager/0.log" Nov 24 12:58:04 crc kubenswrapper[4930]: I1124 12:58:04.273784 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8667fbf6f6-2jhpd_2115a6ba-c1ea-45f6-a340-7ccd67a77bbd/kube-rbac-proxy/0.log" Nov 24 12:58:04 crc kubenswrapper[4930]: I1124 12:58:04.409658 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8667fbf6f6-2jhpd_2115a6ba-c1ea-45f6-a340-7ccd67a77bbd/manager/0.log" Nov 24 12:58:04 crc kubenswrapper[4930]: I1124 12:58:04.486887 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-bf4c6585d-22kp5_525584f5-a41b-4189-986d-32f6c4e6bc16/kube-rbac-proxy/0.log" Nov 24 12:58:04 crc kubenswrapper[4930]: I1124 12:58:04.525212 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-bf4c6585d-22kp5_525584f5-a41b-4189-986d-32f6c4e6bc16/manager/0.log" Nov 24 12:58:04 crc kubenswrapper[4930]: I1124 12:58:04.670833 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d86b44686-4svhq_5597a8b4-0ed5-4a54-ba70-6b2a7b9d2a8e/kube-rbac-proxy/0.log" Nov 24 12:58:04 crc kubenswrapper[4930]: I1124 12:58:04.733009 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d86b44686-4svhq_5597a8b4-0ed5-4a54-ba70-6b2a7b9d2a8e/manager/0.log" Nov 24 12:58:04 crc kubenswrapper[4930]: I1124 12:58:04.843028 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-769d9c7585-z7ftj_606a5459-832e-4986-a171-4fd89e3ee1ec/kube-rbac-proxy/0.log" Nov 24 12:58:04 crc kubenswrapper[4930]: I1124 12:58:04.987288 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5c75d7c94b-ngxgx_2652d83c-0fb2-41a7-a372-2f8e48ea33cc/kube-rbac-proxy/0.log" Nov 24 12:58:05 crc kubenswrapper[4930]: I1124 12:58:05.068761 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5c75d7c94b-ngxgx_2652d83c-0fb2-41a7-a372-2f8e48ea33cc/manager/0.log" Nov 24 12:58:05 crc kubenswrapper[4930]: I1124 12:58:05.071913 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-769d9c7585-z7ftj_606a5459-832e-4986-a171-4fd89e3ee1ec/manager/0.log" Nov 24 12:58:05 crc kubenswrapper[4930]: I1124 12:58:05.264644 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7879fb76fd-d9wbt_37344a1b-ea4d-4dcf-a803-3811a5626106/kube-rbac-proxy/0.log" Nov 24 12:58:05 crc kubenswrapper[4930]: I1124 12:58:05.388443 4930 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7879fb76fd-d9wbt_37344a1b-ea4d-4dcf-a803-3811a5626106/manager/0.log" Nov 24 12:58:05 crc kubenswrapper[4930]: I1124 12:58:05.406948 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7bb88cb858-4ffrf_4b01f462-8bc8-4f01-ac0c-76452c353177/kube-rbac-proxy/0.log" Nov 24 12:58:05 crc kubenswrapper[4930]: I1124 12:58:05.562391 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7bb88cb858-4ffrf_4b01f462-8bc8-4f01-ac0c-76452c353177/manager/0.log" Nov 24 12:58:05 crc kubenswrapper[4930]: I1124 12:58:05.645820 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6f8c5b86cb-kdw5m_a60dc80f-2382-4901-a79e-1468759d9281/kube-rbac-proxy/0.log" Nov 24 12:58:05 crc kubenswrapper[4930]: I1124 12:58:05.652915 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6f8c5b86cb-kdw5m_a60dc80f-2382-4901-a79e-1468759d9281/manager/0.log" Nov 24 12:58:05 crc kubenswrapper[4930]: I1124 12:58:05.926124 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-66b7d6f598-g2cfx_39e1c56a-84c3-4f33-a16d-77c62d65cd0f/kube-rbac-proxy/0.log" Nov 24 12:58:06 crc kubenswrapper[4930]: I1124 12:58:06.123798 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-86d796d84d-2m7pb_8bda08b7-e7e9-41cb-b01a-9d85a18a4ce8/kube-rbac-proxy/0.log" Nov 24 12:58:06 crc kubenswrapper[4930]: I1124 12:58:06.159477 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-66b7d6f598-g2cfx_39e1c56a-84c3-4f33-a16d-77c62d65cd0f/manager/0.log" Nov 24 12:58:06 crc 
kubenswrapper[4930]: I1124 12:58:06.239408 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-86d796d84d-2m7pb_8bda08b7-e7e9-41cb-b01a-9d85a18a4ce8/manager/0.log" Nov 24 12:58:06 crc kubenswrapper[4930]: I1124 12:58:06.431096 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6fdc856c5d-8kwlf_9e55dcae-85ee-412f-aa9b-3fc5a061d595/kube-rbac-proxy/0.log" Nov 24 12:58:06 crc kubenswrapper[4930]: I1124 12:58:06.463320 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6fdc856c5d-8kwlf_9e55dcae-85ee-412f-aa9b-3fc5a061d595/manager/0.log" Nov 24 12:58:06 crc kubenswrapper[4930]: I1124 12:58:06.634265 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq_f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9/kube-rbac-proxy/0.log" Nov 24 12:58:06 crc kubenswrapper[4930]: I1124 12:58:06.702492 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq_f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9/manager/0.log" Nov 24 12:58:06 crc kubenswrapper[4930]: I1124 12:58:06.762675 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6cb9dc54f8-67b99_bd00a0b4-94c5-4ce5-b162-65c27e70c254/kube-rbac-proxy/0.log" Nov 24 12:58:06 crc kubenswrapper[4930]: I1124 12:58:06.905490 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8486c7f98b-v5s6l_a258ca7d-5a5d-477b-919c-e770ab7fa9cd/kube-rbac-proxy/0.log" Nov 24 12:58:07 crc kubenswrapper[4930]: I1124 12:58:07.206222 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8486c7f98b-v5s6l_a258ca7d-5a5d-477b-919c-e770ab7fa9cd/operator/0.log" Nov 24 12:58:07 crc kubenswrapper[4930]: I1124 12:58:07.240384 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5mhrz_df43ee8c-48c3-4014-a134-a3fddf9e8194/registry-server/0.log" Nov 24 12:58:07 crc kubenswrapper[4930]: I1124 12:58:07.262335 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5bdf4f7f7f-g62mm_25cf6a11-4150-4091-a6b8-d7510c5ca5ac/kube-rbac-proxy/0.log" Nov 24 12:58:07 crc kubenswrapper[4930]: I1124 12:58:07.501485 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5bdf4f7f7f-g62mm_25cf6a11-4150-4091-a6b8-d7510c5ca5ac/manager/0.log" Nov 24 12:58:07 crc kubenswrapper[4930]: I1124 12:58:07.507078 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-6dc664666c-qvfs7_dbb47a0b-1e01-47b7-b57f-20e2e908674e/manager/0.log" Nov 24 12:58:07 crc kubenswrapper[4930]: I1124 12:58:07.551140 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-6dc664666c-qvfs7_dbb47a0b-1e01-47b7-b57f-20e2e908674e/kube-rbac-proxy/0.log" Nov 24 12:58:07 crc kubenswrapper[4930]: I1124 12:58:07.771921 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6_83d079ef-a30c-458e-a350-c6f6d9a8985f/operator/0.log" Nov 24 12:58:07 crc kubenswrapper[4930]: I1124 12:58:07.799499 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-799cb6ffd6-kzwpc_6de96fac-ce97-4bec-a2af-f50f839454ea/kube-rbac-proxy/0.log" Nov 24 12:58:07 crc kubenswrapper[4930]: I1124 12:58:07.974406 4930 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-799cb6ffd6-kzwpc_6de96fac-ce97-4bec-a2af-f50f839454ea/manager/0.log" Nov 24 12:58:08 crc kubenswrapper[4930]: I1124 12:58:08.034466 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7798859c74-27cpb_f7031ec9-a046-4f1f-93e0-a6da41013d68/kube-rbac-proxy/0.log" Nov 24 12:58:08 crc kubenswrapper[4930]: I1124 12:58:08.149966 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7798859c74-27cpb_f7031ec9-a046-4f1f-93e0-a6da41013d68/manager/0.log" Nov 24 12:58:08 crc kubenswrapper[4930]: I1124 12:58:08.266314 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6cb9dc54f8-67b99_bd00a0b4-94c5-4ce5-b162-65c27e70c254/manager/0.log" Nov 24 12:58:08 crc kubenswrapper[4930]: I1124 12:58:08.292385 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8464cf66df-f2q9m_6db937f0-a6f1-44e0-87b8-cd4e2d645e24/kube-rbac-proxy/0.log" Nov 24 12:58:08 crc kubenswrapper[4930]: I1124 12:58:08.314631 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8464cf66df-f2q9m_6db937f0-a6f1-44e0-87b8-cd4e2d645e24/manager/0.log" Nov 24 12:58:08 crc kubenswrapper[4930]: I1124 12:58:08.516393 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cd4fb6f79-2zd5j_21e42885-6ebc-4b29-a2d1-32f64e257e11/kube-rbac-proxy/0.log" Nov 24 12:58:08 crc kubenswrapper[4930]: I1124 12:58:08.527741 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cd4fb6f79-2zd5j_21e42885-6ebc-4b29-a2d1-32f64e257e11/manager/0.log" Nov 24 12:58:24 crc kubenswrapper[4930]: 
I1124 12:58:24.511393 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xjpj4_080a5d44-2fa6-4e44-bd77-59047f85aea9/control-plane-machine-set-operator/0.log" Nov 24 12:58:24 crc kubenswrapper[4930]: I1124 12:58:24.700320 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kw8wv_28bc15a8-f8ed-4595-8a4f-e0d9e895c085/kube-rbac-proxy/0.log" Nov 24 12:58:24 crc kubenswrapper[4930]: I1124 12:58:24.760069 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kw8wv_28bc15a8-f8ed-4595-8a4f-e0d9e895c085/machine-api-operator/0.log" Nov 24 12:58:35 crc kubenswrapper[4930]: I1124 12:58:35.768795 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-7rggt_475d077a-f4ed-4d11-9cc9-ec7b5dc365fe/cert-manager-controller/0.log" Nov 24 12:58:35 crc kubenswrapper[4930]: I1124 12:58:35.903412 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-6cpnr_cbbf065d-9533-4da3-80b7-0f20e160caf4/cert-manager-cainjector/0.log" Nov 24 12:58:36 crc kubenswrapper[4930]: I1124 12:58:36.003665 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-54lcm_baaa4d3f-5068-4824-a874-eb5e484bcf5b/cert-manager-webhook/0.log" Nov 24 12:58:48 crc kubenswrapper[4930]: I1124 12:58:48.719157 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-2mgrm_76de68fb-d44e-4e24-8843-18718d6763df/nmstate-console-plugin/0.log" Nov 24 12:58:48 crc kubenswrapper[4930]: I1124 12:58:48.922398 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-zgbjb_06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8/nmstate-handler/0.log" Nov 24 12:58:49 crc kubenswrapper[4930]: I1124 
12:58:49.011465 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-4tlrm_aa0b5808-c9b9-42b0-b585-1677b72ed1f3/kube-rbac-proxy/0.log" Nov 24 12:58:49 crc kubenswrapper[4930]: I1124 12:58:49.083310 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-4tlrm_aa0b5808-c9b9-42b0-b585-1677b72ed1f3/nmstate-metrics/0.log" Nov 24 12:58:49 crc kubenswrapper[4930]: I1124 12:58:49.149293 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-vkg6v_326bae6a-98bd-4c7a-adfe-68f5680ac766/nmstate-operator/0.log" Nov 24 12:58:49 crc kubenswrapper[4930]: I1124 12:58:49.317611 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-569q4_fd169461-4da3-47da-b2b5-d7c796f9eec9/nmstate-webhook/0.log" Nov 24 12:59:01 crc kubenswrapper[4930]: I1124 12:59:01.809441 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:59:01 crc kubenswrapper[4930]: I1124 12:59:01.810099 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:59:05 crc kubenswrapper[4930]: I1124 12:59:05.659375 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-twjmq_86addadb-2b19-4ba8-b365-0d5d5dd326c5/kube-rbac-proxy/0.log" Nov 24 12:59:05 crc kubenswrapper[4930]: I1124 12:59:05.737653 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6c7b4b5f48-twjmq_86addadb-2b19-4ba8-b365-0d5d5dd326c5/controller/0.log" Nov 24 12:59:05 crc kubenswrapper[4930]: I1124 12:59:05.862420 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-tdgdd_086e1816-851c-4997-b8f2-04563ff50e05/frr-k8s-webhook-server/0.log" Nov 24 12:59:05 crc kubenswrapper[4930]: I1124 12:59:05.993624 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-frr-files/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.141037 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-frr-files/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.148839 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-reloader/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.162650 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-metrics/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.197206 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-reloader/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.358389 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-frr-files/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.376615 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-reloader/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.402608 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-metrics/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.414528 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-metrics/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.557729 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-frr-files/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.570259 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-metrics/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.571619 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-reloader/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.616259 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/controller/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.770996 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/kube-rbac-proxy/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.781712 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/frr-metrics/0.log" Nov 24 12:59:06 crc kubenswrapper[4930]: I1124 12:59:06.831350 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/kube-rbac-proxy-frr/0.log" Nov 24 12:59:07 crc kubenswrapper[4930]: I1124 12:59:07.025453 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/reloader/0.log" Nov 24 12:59:07 crc kubenswrapper[4930]: I1124 12:59:07.049389 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6d8988b99d-fjfg4_37f079f2-d796-4fce-8fdb-030a0a663e1b/manager/0.log" Nov 24 12:59:07 crc kubenswrapper[4930]: I1124 12:59:07.345054 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-677786b954-pxf8r_5339f9f0-99ee-4ff8-90cc-8ab86611abc6/webhook-server/0.log" Nov 24 12:59:07 crc kubenswrapper[4930]: I1124 12:59:07.459312 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-t7cvk_cdda2566-3ca8-492b-a37f-18a8beccb6a6/kube-rbac-proxy/0.log" Nov 24 12:59:07 crc kubenswrapper[4930]: I1124 12:59:07.999525 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-t7cvk_cdda2566-3ca8-492b-a37f-18a8beccb6a6/speaker/0.log" Nov 24 12:59:08 crc kubenswrapper[4930]: I1124 12:59:08.128497 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/frr/0.log" Nov 24 12:59:19 crc kubenswrapper[4930]: I1124 12:59:19.069014 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/util/0.log" Nov 24 12:59:19 crc kubenswrapper[4930]: I1124 12:59:19.274580 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/pull/0.log" Nov 24 12:59:19 crc kubenswrapper[4930]: I1124 12:59:19.278337 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/util/0.log" Nov 24 12:59:19 crc kubenswrapper[4930]: I1124 12:59:19.316935 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/pull/0.log" Nov 24 12:59:19 crc kubenswrapper[4930]: I1124 12:59:19.471725 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/util/0.log" Nov 24 12:59:19 crc kubenswrapper[4930]: I1124 12:59:19.473565 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/pull/0.log" Nov 24 12:59:19 crc kubenswrapper[4930]: I1124 12:59:19.487315 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/extract/0.log" Nov 24 12:59:19 crc kubenswrapper[4930]: I1124 12:59:19.648777 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/extract-utilities/0.log" Nov 24 12:59:19 crc kubenswrapper[4930]: I1124 12:59:19.808199 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/extract-utilities/0.log" Nov 24 12:59:19 crc kubenswrapper[4930]: I1124 12:59:19.828032 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/extract-content/0.log" Nov 24 12:59:19 crc kubenswrapper[4930]: I1124 12:59:19.832740 4930 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/extract-content/0.log" Nov 24 12:59:19 crc kubenswrapper[4930]: I1124 12:59:19.982160 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/extract-utilities/0.log" Nov 24 12:59:20 crc kubenswrapper[4930]: I1124 12:59:20.036005 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/extract-content/0.log" Nov 24 12:59:20 crc kubenswrapper[4930]: I1124 12:59:20.218726 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/extract-utilities/0.log" Nov 24 12:59:20 crc kubenswrapper[4930]: I1124 12:59:20.442210 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/extract-utilities/0.log" Nov 24 12:59:20 crc kubenswrapper[4930]: I1124 12:59:20.447482 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/extract-content/0.log" Nov 24 12:59:20 crc kubenswrapper[4930]: I1124 12:59:20.519895 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/extract-content/0.log" Nov 24 12:59:20 crc kubenswrapper[4930]: I1124 12:59:20.635612 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/registry-server/0.log" Nov 24 12:59:20 crc kubenswrapper[4930]: I1124 12:59:20.704857 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/extract-content/0.log" Nov 24 12:59:20 crc kubenswrapper[4930]: I1124 12:59:20.708370 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/extract-utilities/0.log" Nov 24 12:59:20 crc kubenswrapper[4930]: I1124 12:59:20.930845 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/registry-server/0.log" Nov 24 12:59:20 crc kubenswrapper[4930]: I1124 12:59:20.976876 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/util/0.log" Nov 24 12:59:21 crc kubenswrapper[4930]: I1124 12:59:21.092482 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/pull/0.log" Nov 24 12:59:21 crc kubenswrapper[4930]: I1124 12:59:21.105423 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/util/0.log" Nov 24 12:59:21 crc kubenswrapper[4930]: I1124 12:59:21.135659 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/pull/0.log" Nov 24 12:59:21 crc kubenswrapper[4930]: I1124 12:59:21.351433 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/util/0.log" Nov 24 12:59:21 crc kubenswrapper[4930]: I1124 12:59:21.351710 4930 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/extract/0.log" Nov 24 12:59:21 crc kubenswrapper[4930]: I1124 12:59:21.356021 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/pull/0.log" Nov 24 12:59:21 crc kubenswrapper[4930]: I1124 12:59:21.514338 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/extract-utilities/0.log" Nov 24 12:59:21 crc kubenswrapper[4930]: I1124 12:59:21.518103 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vn8d4_6adfccee-6f09-45b8-b8b9-4cd6fe524680/marketplace-operator/0.log" Nov 24 12:59:21 crc kubenswrapper[4930]: I1124 12:59:21.754331 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/extract-utilities/0.log" Nov 24 12:59:21 crc kubenswrapper[4930]: I1124 12:59:21.763632 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/extract-content/0.log" Nov 24 12:59:21 crc kubenswrapper[4930]: I1124 12:59:21.763710 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/extract-content/0.log" Nov 24 12:59:21 crc kubenswrapper[4930]: I1124 12:59:21.939318 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/extract-utilities/0.log" Nov 24 12:59:21 crc kubenswrapper[4930]: I1124 12:59:21.939474 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/extract-content/0.log" Nov 24 12:59:22 crc kubenswrapper[4930]: I1124 12:59:22.087680 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/registry-server/0.log" Nov 24 12:59:22 crc kubenswrapper[4930]: I1124 12:59:22.143038 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7z7jl_f1fad967-63fa-4433-8aad-deb662733831/extract-utilities/0.log" Nov 24 12:59:22 crc kubenswrapper[4930]: I1124 12:59:22.306210 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7z7jl_f1fad967-63fa-4433-8aad-deb662733831/extract-utilities/0.log" Nov 24 12:59:22 crc kubenswrapper[4930]: I1124 12:59:22.314711 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7z7jl_f1fad967-63fa-4433-8aad-deb662733831/extract-content/0.log" Nov 24 12:59:22 crc kubenswrapper[4930]: I1124 12:59:22.319242 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7z7jl_f1fad967-63fa-4433-8aad-deb662733831/extract-content/0.log" Nov 24 12:59:22 crc kubenswrapper[4930]: I1124 12:59:22.482259 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7z7jl_f1fad967-63fa-4433-8aad-deb662733831/extract-content/0.log" Nov 24 12:59:22 crc kubenswrapper[4930]: I1124 12:59:22.512010 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7z7jl_f1fad967-63fa-4433-8aad-deb662733831/extract-utilities/0.log" Nov 24 12:59:22 crc kubenswrapper[4930]: I1124 12:59:22.944773 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7z7jl_f1fad967-63fa-4433-8aad-deb662733831/registry-server/0.log" Nov 24 
12:59:31 crc kubenswrapper[4930]: I1124 12:59:31.809604 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:59:31 crc kubenswrapper[4930]: I1124 12:59:31.810216 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.200471 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr"] Nov 24 13:00:00 crc kubenswrapper[4930]: E1124 13:00:00.202037 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5c90aa-b97b-4839-8829-a94d86fddc8e" containerName="container-00" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.202066 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5c90aa-b97b-4839-8829-a94d86fddc8e" containerName="container-00" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.202376 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5c90aa-b97b-4839-8829-a94d86fddc8e" containerName="container-00" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.203330 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.215505 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.215806 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.219947 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr"] Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.330685 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99s9x\" (UniqueName: \"kubernetes.io/projected/e3b0fdcd-23aa-416b-a686-d06a1706ec18-kube-api-access-99s9x\") pod \"collect-profiles-29399820-8ltmr\" (UID: \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.330906 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3b0fdcd-23aa-416b-a686-d06a1706ec18-config-volume\") pod \"collect-profiles-29399820-8ltmr\" (UID: \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.331089 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3b0fdcd-23aa-416b-a686-d06a1706ec18-secret-volume\") pod \"collect-profiles-29399820-8ltmr\" (UID: \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.432694 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3b0fdcd-23aa-416b-a686-d06a1706ec18-config-volume\") pod \"collect-profiles-29399820-8ltmr\" (UID: \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.432779 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3b0fdcd-23aa-416b-a686-d06a1706ec18-secret-volume\") pod \"collect-profiles-29399820-8ltmr\" (UID: \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.432835 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99s9x\" (UniqueName: \"kubernetes.io/projected/e3b0fdcd-23aa-416b-a686-d06a1706ec18-kube-api-access-99s9x\") pod \"collect-profiles-29399820-8ltmr\" (UID: \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.434353 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3b0fdcd-23aa-416b-a686-d06a1706ec18-config-volume\") pod \"collect-profiles-29399820-8ltmr\" (UID: \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.450150 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e3b0fdcd-23aa-416b-a686-d06a1706ec18-secret-volume\") pod \"collect-profiles-29399820-8ltmr\" (UID: \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.453103 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99s9x\" (UniqueName: \"kubernetes.io/projected/e3b0fdcd-23aa-416b-a686-d06a1706ec18-kube-api-access-99s9x\") pod \"collect-profiles-29399820-8ltmr\" (UID: \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" Nov 24 13:00:00 crc kubenswrapper[4930]: I1124 13:00:00.533938 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" Nov 24 13:00:01 crc kubenswrapper[4930]: I1124 13:00:01.009210 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr"] Nov 24 13:00:01 crc kubenswrapper[4930]: I1124 13:00:01.111500 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" event={"ID":"e3b0fdcd-23aa-416b-a686-d06a1706ec18","Type":"ContainerStarted","Data":"15c1d9cfd3a42caec391039a8eb8c095cdd17c29f02555df10482e474b403e3f"} Nov 24 13:00:01 crc kubenswrapper[4930]: I1124 13:00:01.809304 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:00:01 crc kubenswrapper[4930]: I1124 13:00:01.809365 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" 
podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:00:01 crc kubenswrapper[4930]: I1124 13:00:01.809407 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 13:00:01 crc kubenswrapper[4930]: I1124 13:00:01.810146 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d7b65a5712740c01ce1afcfac553a05814266846ca2f298cc77ddec359b6809"} pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:00:01 crc kubenswrapper[4930]: I1124 13:00:01.810193 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://8d7b65a5712740c01ce1afcfac553a05814266846ca2f298cc77ddec359b6809" gracePeriod=600 Nov 24 13:00:02 crc kubenswrapper[4930]: I1124 13:00:02.147166 4930 generic.go:334] "Generic (PLEG): container finished" podID="e3b0fdcd-23aa-416b-a686-d06a1706ec18" containerID="475c6f62e60c963b3060cccea5988ff76873ac759bcd9a0966eeb3caac66192b" exitCode=0 Nov 24 13:00:02 crc kubenswrapper[4930]: I1124 13:00:02.147552 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" event={"ID":"e3b0fdcd-23aa-416b-a686-d06a1706ec18","Type":"ContainerDied","Data":"475c6f62e60c963b3060cccea5988ff76873ac759bcd9a0966eeb3caac66192b"} Nov 24 13:00:02 crc kubenswrapper[4930]: I1124 13:00:02.155439 4930 generic.go:334] "Generic (PLEG): container finished" 
podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="8d7b65a5712740c01ce1afcfac553a05814266846ca2f298cc77ddec359b6809" exitCode=0 Nov 24 13:00:02 crc kubenswrapper[4930]: I1124 13:00:02.155489 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"8d7b65a5712740c01ce1afcfac553a05814266846ca2f298cc77ddec359b6809"} Nov 24 13:00:02 crc kubenswrapper[4930]: I1124 13:00:02.155529 4930 scope.go:117] "RemoveContainer" containerID="92aa49508cf8dc680376804c4c8ae7dabb02987a1685531672acad5a000b7505" Nov 24 13:00:03 crc kubenswrapper[4930]: I1124 13:00:03.167495 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80"} Nov 24 13:00:03 crc kubenswrapper[4930]: I1124 13:00:03.524484 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" Nov 24 13:00:03 crc kubenswrapper[4930]: I1124 13:00:03.595295 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3b0fdcd-23aa-416b-a686-d06a1706ec18-secret-volume\") pod \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\" (UID: \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\") " Nov 24 13:00:03 crc kubenswrapper[4930]: I1124 13:00:03.595388 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99s9x\" (UniqueName: \"kubernetes.io/projected/e3b0fdcd-23aa-416b-a686-d06a1706ec18-kube-api-access-99s9x\") pod \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\" (UID: \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\") " Nov 24 13:00:03 crc kubenswrapper[4930]: I1124 13:00:03.595523 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3b0fdcd-23aa-416b-a686-d06a1706ec18-config-volume\") pod \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\" (UID: \"e3b0fdcd-23aa-416b-a686-d06a1706ec18\") " Nov 24 13:00:03 crc kubenswrapper[4930]: I1124 13:00:03.596496 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3b0fdcd-23aa-416b-a686-d06a1706ec18-config-volume" (OuterVolumeSpecName: "config-volume") pod "e3b0fdcd-23aa-416b-a686-d06a1706ec18" (UID: "e3b0fdcd-23aa-416b-a686-d06a1706ec18"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:00:03 crc kubenswrapper[4930]: I1124 13:00:03.604839 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b0fdcd-23aa-416b-a686-d06a1706ec18-kube-api-access-99s9x" (OuterVolumeSpecName: "kube-api-access-99s9x") pod "e3b0fdcd-23aa-416b-a686-d06a1706ec18" (UID: "e3b0fdcd-23aa-416b-a686-d06a1706ec18"). 
InnerVolumeSpecName "kube-api-access-99s9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:03 crc kubenswrapper[4930]: I1124 13:00:03.607687 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b0fdcd-23aa-416b-a686-d06a1706ec18-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e3b0fdcd-23aa-416b-a686-d06a1706ec18" (UID: "e3b0fdcd-23aa-416b-a686-d06a1706ec18"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:00:03 crc kubenswrapper[4930]: I1124 13:00:03.697143 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99s9x\" (UniqueName: \"kubernetes.io/projected/e3b0fdcd-23aa-416b-a686-d06a1706ec18-kube-api-access-99s9x\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:03 crc kubenswrapper[4930]: I1124 13:00:03.697186 4930 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3b0fdcd-23aa-416b-a686-d06a1706ec18-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:03 crc kubenswrapper[4930]: I1124 13:00:03.697195 4930 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3b0fdcd-23aa-416b-a686-d06a1706ec18-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:04 crc kubenswrapper[4930]: I1124 13:00:04.179645 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" Nov 24 13:00:04 crc kubenswrapper[4930]: I1124 13:00:04.180699 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-8ltmr" event={"ID":"e3b0fdcd-23aa-416b-a686-d06a1706ec18","Type":"ContainerDied","Data":"15c1d9cfd3a42caec391039a8eb8c095cdd17c29f02555df10482e474b403e3f"} Nov 24 13:00:04 crc kubenswrapper[4930]: I1124 13:00:04.180780 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15c1d9cfd3a42caec391039a8eb8c095cdd17c29f02555df10482e474b403e3f" Nov 24 13:00:04 crc kubenswrapper[4930]: I1124 13:00:04.600295 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr"] Nov 24 13:00:04 crc kubenswrapper[4930]: I1124 13:00:04.609982 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-mz9mr"] Nov 24 13:00:06 crc kubenswrapper[4930]: I1124 13:00:06.096444 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3963e6bb-dfea-4a47-9765-0203d3b7ed65" path="/var/lib/kubelet/pods/3963e6bb-dfea-4a47-9765-0203d3b7ed65/volumes" Nov 24 13:00:24 crc kubenswrapper[4930]: I1124 13:00:24.945503 4930 scope.go:117] "RemoveContainer" containerID="84037103c3c11b749e0515510a49bf3342b1a61a07cb3d2d13e722c47ea6ad27" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.168903 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29399821-dsqq9"] Nov 24 13:01:00 crc kubenswrapper[4930]: E1124 13:01:00.171085 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b0fdcd-23aa-416b-a686-d06a1706ec18" containerName="collect-profiles" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.171109 4930 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e3b0fdcd-23aa-416b-a686-d06a1706ec18" containerName="collect-profiles" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.171598 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b0fdcd-23aa-416b-a686-d06a1706ec18" containerName="collect-profiles" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.172664 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.199997 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399821-dsqq9"] Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.285480 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-combined-ca-bundle\") pod \"keystone-cron-29399821-dsqq9\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.285815 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-fernet-keys\") pod \"keystone-cron-29399821-dsqq9\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.285876 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mg7c\" (UniqueName: \"kubernetes.io/projected/94a0bff7-6443-46d2-8696-cdbbfde75f76-kube-api-access-4mg7c\") pod \"keystone-cron-29399821-dsqq9\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.286125 4930 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-config-data\") pod \"keystone-cron-29399821-dsqq9\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.388422 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-config-data\") pod \"keystone-cron-29399821-dsqq9\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.388547 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-combined-ca-bundle\") pod \"keystone-cron-29399821-dsqq9\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.388597 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-fernet-keys\") pod \"keystone-cron-29399821-dsqq9\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.388620 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mg7c\" (UniqueName: \"kubernetes.io/projected/94a0bff7-6443-46d2-8696-cdbbfde75f76-kube-api-access-4mg7c\") pod \"keystone-cron-29399821-dsqq9\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.397391 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-fernet-keys\") pod \"keystone-cron-29399821-dsqq9\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.397497 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-combined-ca-bundle\") pod \"keystone-cron-29399821-dsqq9\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.398819 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-config-data\") pod \"keystone-cron-29399821-dsqq9\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.417859 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mg7c\" (UniqueName: \"kubernetes.io/projected/94a0bff7-6443-46d2-8696-cdbbfde75f76-kube-api-access-4mg7c\") pod \"keystone-cron-29399821-dsqq9\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:00 crc kubenswrapper[4930]: I1124 13:01:00.504519 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:01 crc kubenswrapper[4930]: I1124 13:01:01.031047 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399821-dsqq9"] Nov 24 13:01:01 crc kubenswrapper[4930]: I1124 13:01:01.761701 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-dsqq9" event={"ID":"94a0bff7-6443-46d2-8696-cdbbfde75f76","Type":"ContainerStarted","Data":"4408735c27d65669dcbba2da5f728a436f67f4fbf0bf25f5cae61259149f0d09"} Nov 24 13:01:01 crc kubenswrapper[4930]: I1124 13:01:01.762090 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-dsqq9" event={"ID":"94a0bff7-6443-46d2-8696-cdbbfde75f76","Type":"ContainerStarted","Data":"86eeae13dcf3db1205057b100f2f0e7474f98cef0bd400cb1a9225f076dadd29"} Nov 24 13:01:01 crc kubenswrapper[4930]: I1124 13:01:01.786809 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29399821-dsqq9" podStartSLOduration=1.7867868 podStartE2EDuration="1.7867868s" podCreationTimestamp="2025-11-24 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:01:01.776090313 +0000 UTC m=+3708.390418263" watchObservedRunningTime="2025-11-24 13:01:01.7867868 +0000 UTC m=+3708.401114750" Nov 24 13:01:03 crc kubenswrapper[4930]: I1124 13:01:03.784983 4930 generic.go:334] "Generic (PLEG): container finished" podID="94a0bff7-6443-46d2-8696-cdbbfde75f76" containerID="4408735c27d65669dcbba2da5f728a436f67f4fbf0bf25f5cae61259149f0d09" exitCode=0 Nov 24 13:01:03 crc kubenswrapper[4930]: I1124 13:01:03.785706 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-dsqq9" event={"ID":"94a0bff7-6443-46d2-8696-cdbbfde75f76","Type":"ContainerDied","Data":"4408735c27d65669dcbba2da5f728a436f67f4fbf0bf25f5cae61259149f0d09"} Nov 
24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.163250 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.289266 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-combined-ca-bundle\") pod \"94a0bff7-6443-46d2-8696-cdbbfde75f76\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.289832 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-fernet-keys\") pod \"94a0bff7-6443-46d2-8696-cdbbfde75f76\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.290005 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mg7c\" (UniqueName: \"kubernetes.io/projected/94a0bff7-6443-46d2-8696-cdbbfde75f76-kube-api-access-4mg7c\") pod \"94a0bff7-6443-46d2-8696-cdbbfde75f76\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.290070 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-config-data\") pod \"94a0bff7-6443-46d2-8696-cdbbfde75f76\" (UID: \"94a0bff7-6443-46d2-8696-cdbbfde75f76\") " Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.305133 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a0bff7-6443-46d2-8696-cdbbfde75f76-kube-api-access-4mg7c" (OuterVolumeSpecName: "kube-api-access-4mg7c") pod "94a0bff7-6443-46d2-8696-cdbbfde75f76" (UID: "94a0bff7-6443-46d2-8696-cdbbfde75f76"). 
InnerVolumeSpecName "kube-api-access-4mg7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.305253 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "94a0bff7-6443-46d2-8696-cdbbfde75f76" (UID: "94a0bff7-6443-46d2-8696-cdbbfde75f76"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.325278 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94a0bff7-6443-46d2-8696-cdbbfde75f76" (UID: "94a0bff7-6443-46d2-8696-cdbbfde75f76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.351193 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-config-data" (OuterVolumeSpecName: "config-data") pod "94a0bff7-6443-46d2-8696-cdbbfde75f76" (UID: "94a0bff7-6443-46d2-8696-cdbbfde75f76"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.393759 4930 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.393808 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mg7c\" (UniqueName: \"kubernetes.io/projected/94a0bff7-6443-46d2-8696-cdbbfde75f76-kube-api-access-4mg7c\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.393824 4930 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.393844 4930 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a0bff7-6443-46d2-8696-cdbbfde75f76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.807493 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-dsqq9" event={"ID":"94a0bff7-6443-46d2-8696-cdbbfde75f76","Type":"ContainerDied","Data":"86eeae13dcf3db1205057b100f2f0e7474f98cef0bd400cb1a9225f076dadd29"} Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.807533 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399821-dsqq9" Nov 24 13:01:05 crc kubenswrapper[4930]: I1124 13:01:05.807554 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86eeae13dcf3db1205057b100f2f0e7474f98cef0bd400cb1a9225f076dadd29" Nov 24 13:01:08 crc kubenswrapper[4930]: I1124 13:01:08.835258 4930 generic.go:334] "Generic (PLEG): container finished" podID="0babc740-20f9-4f89-95e9-b6e710be5633" containerID="b1a7cdf1214bf6f14f2c8a47086a66674f50f7255087a6dd13a19be0cae8cc98" exitCode=0 Nov 24 13:01:08 crc kubenswrapper[4930]: I1124 13:01:08.835710 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jcmb/must-gather-d9vw9" event={"ID":"0babc740-20f9-4f89-95e9-b6e710be5633","Type":"ContainerDied","Data":"b1a7cdf1214bf6f14f2c8a47086a66674f50f7255087a6dd13a19be0cae8cc98"} Nov 24 13:01:08 crc kubenswrapper[4930]: I1124 13:01:08.836652 4930 scope.go:117] "RemoveContainer" containerID="b1a7cdf1214bf6f14f2c8a47086a66674f50f7255087a6dd13a19be0cae8cc98" Nov 24 13:01:09 crc kubenswrapper[4930]: I1124 13:01:09.618033 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9jcmb_must-gather-d9vw9_0babc740-20f9-4f89-95e9-b6e710be5633/gather/0.log" Nov 24 13:01:17 crc kubenswrapper[4930]: I1124 13:01:17.300224 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9jcmb/must-gather-d9vw9"] Nov 24 13:01:17 crc kubenswrapper[4930]: I1124 13:01:17.301326 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9jcmb/must-gather-d9vw9" podUID="0babc740-20f9-4f89-95e9-b6e710be5633" containerName="copy" containerID="cri-o://460a5109a284dd6d18f25770449113a124dee9041f084836ae1b8c307ba1acca" gracePeriod=2 Nov 24 13:01:17 crc kubenswrapper[4930]: I1124 13:01:17.324307 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9jcmb/must-gather-d9vw9"] Nov 24 13:01:17 crc 
kubenswrapper[4930]: I1124 13:01:17.764328 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9jcmb_must-gather-d9vw9_0babc740-20f9-4f89-95e9-b6e710be5633/copy/0.log" Nov 24 13:01:17 crc kubenswrapper[4930]: I1124 13:01:17.765039 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jcmb/must-gather-d9vw9" Nov 24 13:01:17 crc kubenswrapper[4930]: I1124 13:01:17.873678 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z9s6\" (UniqueName: \"kubernetes.io/projected/0babc740-20f9-4f89-95e9-b6e710be5633-kube-api-access-9z9s6\") pod \"0babc740-20f9-4f89-95e9-b6e710be5633\" (UID: \"0babc740-20f9-4f89-95e9-b6e710be5633\") " Nov 24 13:01:17 crc kubenswrapper[4930]: I1124 13:01:17.873980 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0babc740-20f9-4f89-95e9-b6e710be5633-must-gather-output\") pod \"0babc740-20f9-4f89-95e9-b6e710be5633\" (UID: \"0babc740-20f9-4f89-95e9-b6e710be5633\") " Nov 24 13:01:17 crc kubenswrapper[4930]: I1124 13:01:17.887856 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0babc740-20f9-4f89-95e9-b6e710be5633-kube-api-access-9z9s6" (OuterVolumeSpecName: "kube-api-access-9z9s6") pod "0babc740-20f9-4f89-95e9-b6e710be5633" (UID: "0babc740-20f9-4f89-95e9-b6e710be5633"). InnerVolumeSpecName "kube-api-access-9z9s6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:01:17 crc kubenswrapper[4930]: I1124 13:01:17.919234 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9jcmb_must-gather-d9vw9_0babc740-20f9-4f89-95e9-b6e710be5633/copy/0.log" Nov 24 13:01:17 crc kubenswrapper[4930]: I1124 13:01:17.920112 4930 generic.go:334] "Generic (PLEG): container finished" podID="0babc740-20f9-4f89-95e9-b6e710be5633" containerID="460a5109a284dd6d18f25770449113a124dee9041f084836ae1b8c307ba1acca" exitCode=143 Nov 24 13:01:17 crc kubenswrapper[4930]: I1124 13:01:17.920168 4930 scope.go:117] "RemoveContainer" containerID="460a5109a284dd6d18f25770449113a124dee9041f084836ae1b8c307ba1acca" Nov 24 13:01:17 crc kubenswrapper[4930]: I1124 13:01:17.920199 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jcmb/must-gather-d9vw9" Nov 24 13:01:17 crc kubenswrapper[4930]: I1124 13:01:17.950915 4930 scope.go:117] "RemoveContainer" containerID="b1a7cdf1214bf6f14f2c8a47086a66674f50f7255087a6dd13a19be0cae8cc98" Nov 24 13:01:17 crc kubenswrapper[4930]: I1124 13:01:17.976460 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z9s6\" (UniqueName: \"kubernetes.io/projected/0babc740-20f9-4f89-95e9-b6e710be5633-kube-api-access-9z9s6\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:18 crc kubenswrapper[4930]: I1124 13:01:18.013750 4930 scope.go:117] "RemoveContainer" containerID="460a5109a284dd6d18f25770449113a124dee9041f084836ae1b8c307ba1acca" Nov 24 13:01:18 crc kubenswrapper[4930]: E1124 13:01:18.014324 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"460a5109a284dd6d18f25770449113a124dee9041f084836ae1b8c307ba1acca\": container with ID starting with 460a5109a284dd6d18f25770449113a124dee9041f084836ae1b8c307ba1acca not found: ID does not exist" 
containerID="460a5109a284dd6d18f25770449113a124dee9041f084836ae1b8c307ba1acca" Nov 24 13:01:18 crc kubenswrapper[4930]: I1124 13:01:18.014371 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"460a5109a284dd6d18f25770449113a124dee9041f084836ae1b8c307ba1acca"} err="failed to get container status \"460a5109a284dd6d18f25770449113a124dee9041f084836ae1b8c307ba1acca\": rpc error: code = NotFound desc = could not find container \"460a5109a284dd6d18f25770449113a124dee9041f084836ae1b8c307ba1acca\": container with ID starting with 460a5109a284dd6d18f25770449113a124dee9041f084836ae1b8c307ba1acca not found: ID does not exist" Nov 24 13:01:18 crc kubenswrapper[4930]: I1124 13:01:18.014400 4930 scope.go:117] "RemoveContainer" containerID="b1a7cdf1214bf6f14f2c8a47086a66674f50f7255087a6dd13a19be0cae8cc98" Nov 24 13:01:18 crc kubenswrapper[4930]: E1124 13:01:18.015367 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1a7cdf1214bf6f14f2c8a47086a66674f50f7255087a6dd13a19be0cae8cc98\": container with ID starting with b1a7cdf1214bf6f14f2c8a47086a66674f50f7255087a6dd13a19be0cae8cc98 not found: ID does not exist" containerID="b1a7cdf1214bf6f14f2c8a47086a66674f50f7255087a6dd13a19be0cae8cc98" Nov 24 13:01:18 crc kubenswrapper[4930]: I1124 13:01:18.015412 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1a7cdf1214bf6f14f2c8a47086a66674f50f7255087a6dd13a19be0cae8cc98"} err="failed to get container status \"b1a7cdf1214bf6f14f2c8a47086a66674f50f7255087a6dd13a19be0cae8cc98\": rpc error: code = NotFound desc = could not find container \"b1a7cdf1214bf6f14f2c8a47086a66674f50f7255087a6dd13a19be0cae8cc98\": container with ID starting with b1a7cdf1214bf6f14f2c8a47086a66674f50f7255087a6dd13a19be0cae8cc98 not found: ID does not exist" Nov 24 13:01:18 crc kubenswrapper[4930]: I1124 13:01:18.028831 4930 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0babc740-20f9-4f89-95e9-b6e710be5633-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0babc740-20f9-4f89-95e9-b6e710be5633" (UID: "0babc740-20f9-4f89-95e9-b6e710be5633"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:01:18 crc kubenswrapper[4930]: I1124 13:01:18.078402 4930 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0babc740-20f9-4f89-95e9-b6e710be5633-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:18 crc kubenswrapper[4930]: I1124 13:01:18.109257 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0babc740-20f9-4f89-95e9-b6e710be5633" path="/var/lib/kubelet/pods/0babc740-20f9-4f89-95e9-b6e710be5633/volumes" Nov 24 13:02:31 crc kubenswrapper[4930]: I1124 13:02:31.808793 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:02:31 crc kubenswrapper[4930]: I1124 13:02:31.809441 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.303778 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zchk8"] Nov 24 13:03:00 crc kubenswrapper[4930]: E1124 13:03:00.305061 4930 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="94a0bff7-6443-46d2-8696-cdbbfde75f76" containerName="keystone-cron" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.305082 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a0bff7-6443-46d2-8696-cdbbfde75f76" containerName="keystone-cron" Nov 24 13:03:00 crc kubenswrapper[4930]: E1124 13:03:00.305110 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0babc740-20f9-4f89-95e9-b6e710be5633" containerName="copy" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.305118 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="0babc740-20f9-4f89-95e9-b6e710be5633" containerName="copy" Nov 24 13:03:00 crc kubenswrapper[4930]: E1124 13:03:00.305149 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0babc740-20f9-4f89-95e9-b6e710be5633" containerName="gather" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.305159 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="0babc740-20f9-4f89-95e9-b6e710be5633" containerName="gather" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.305578 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a0bff7-6443-46d2-8696-cdbbfde75f76" containerName="keystone-cron" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.305671 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="0babc740-20f9-4f89-95e9-b6e710be5633" containerName="gather" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.305728 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="0babc740-20f9-4f89-95e9-b6e710be5633" containerName="copy" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.307587 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.328994 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zchk8"] Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.422867 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102030ab-a063-44ab-9052-a9b9bdde7d61-utilities\") pod \"certified-operators-zchk8\" (UID: \"102030ab-a063-44ab-9052-a9b9bdde7d61\") " pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.423307 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102030ab-a063-44ab-9052-a9b9bdde7d61-catalog-content\") pod \"certified-operators-zchk8\" (UID: \"102030ab-a063-44ab-9052-a9b9bdde7d61\") " pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.423486 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw6g7\" (UniqueName: \"kubernetes.io/projected/102030ab-a063-44ab-9052-a9b9bdde7d61-kube-api-access-sw6g7\") pod \"certified-operators-zchk8\" (UID: \"102030ab-a063-44ab-9052-a9b9bdde7d61\") " pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.525602 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102030ab-a063-44ab-9052-a9b9bdde7d61-catalog-content\") pod \"certified-operators-zchk8\" (UID: \"102030ab-a063-44ab-9052-a9b9bdde7d61\") " pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.525707 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sw6g7\" (UniqueName: \"kubernetes.io/projected/102030ab-a063-44ab-9052-a9b9bdde7d61-kube-api-access-sw6g7\") pod \"certified-operators-zchk8\" (UID: \"102030ab-a063-44ab-9052-a9b9bdde7d61\") " pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.525741 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102030ab-a063-44ab-9052-a9b9bdde7d61-utilities\") pod \"certified-operators-zchk8\" (UID: \"102030ab-a063-44ab-9052-a9b9bdde7d61\") " pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.526334 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102030ab-a063-44ab-9052-a9b9bdde7d61-utilities\") pod \"certified-operators-zchk8\" (UID: \"102030ab-a063-44ab-9052-a9b9bdde7d61\") " pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.526631 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102030ab-a063-44ab-9052-a9b9bdde7d61-catalog-content\") pod \"certified-operators-zchk8\" (UID: \"102030ab-a063-44ab-9052-a9b9bdde7d61\") " pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.548503 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw6g7\" (UniqueName: \"kubernetes.io/projected/102030ab-a063-44ab-9052-a9b9bdde7d61-kube-api-access-sw6g7\") pod \"certified-operators-zchk8\" (UID: \"102030ab-a063-44ab-9052-a9b9bdde7d61\") " pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:00 crc kubenswrapper[4930]: I1124 13:03:00.637015 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:01 crc kubenswrapper[4930]: I1124 13:03:01.205593 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zchk8"] Nov 24 13:03:01 crc kubenswrapper[4930]: I1124 13:03:01.809265 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:03:01 crc kubenswrapper[4930]: I1124 13:03:01.809332 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:03:02 crc kubenswrapper[4930]: I1124 13:03:02.182326 4930 generic.go:334] "Generic (PLEG): container finished" podID="102030ab-a063-44ab-9052-a9b9bdde7d61" containerID="4c6dd45bd4130c482b4fb3d0624676a0d56e764529205c59ff9f6c208b694a0c" exitCode=0 Nov 24 13:03:02 crc kubenswrapper[4930]: I1124 13:03:02.182428 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zchk8" event={"ID":"102030ab-a063-44ab-9052-a9b9bdde7d61","Type":"ContainerDied","Data":"4c6dd45bd4130c482b4fb3d0624676a0d56e764529205c59ff9f6c208b694a0c"} Nov 24 13:03:02 crc kubenswrapper[4930]: I1124 13:03:02.182810 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zchk8" event={"ID":"102030ab-a063-44ab-9052-a9b9bdde7d61","Type":"ContainerStarted","Data":"37c90509ada35ba04f1f013bb5720603972861cae45f1cf91cdbce190b7ed104"} Nov 24 13:03:02 crc kubenswrapper[4930]: I1124 13:03:02.184807 4930 provider.go:102] Refreshing 
cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 13:03:03 crc kubenswrapper[4930]: I1124 13:03:03.192738 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zchk8" event={"ID":"102030ab-a063-44ab-9052-a9b9bdde7d61","Type":"ContainerStarted","Data":"a153b19e3f4135334c6b38f83341860bcb01074f87aeea978390574940bef051"} Nov 24 13:03:04 crc kubenswrapper[4930]: I1124 13:03:04.209957 4930 generic.go:334] "Generic (PLEG): container finished" podID="102030ab-a063-44ab-9052-a9b9bdde7d61" containerID="a153b19e3f4135334c6b38f83341860bcb01074f87aeea978390574940bef051" exitCode=0 Nov 24 13:03:04 crc kubenswrapper[4930]: I1124 13:03:04.210293 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zchk8" event={"ID":"102030ab-a063-44ab-9052-a9b9bdde7d61","Type":"ContainerDied","Data":"a153b19e3f4135334c6b38f83341860bcb01074f87aeea978390574940bef051"} Nov 24 13:03:05 crc kubenswrapper[4930]: I1124 13:03:05.222440 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zchk8" event={"ID":"102030ab-a063-44ab-9052-a9b9bdde7d61","Type":"ContainerStarted","Data":"fb0b3ca4692135225fe575bb09255dd59b2535ba3b8e5c747079775cab9ef24f"} Nov 24 13:03:05 crc kubenswrapper[4930]: I1124 13:03:05.245963 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zchk8" podStartSLOduration=2.7917643930000002 podStartE2EDuration="5.245941353s" podCreationTimestamp="2025-11-24 13:03:00 +0000 UTC" firstStartedPulling="2025-11-24 13:03:02.184549194 +0000 UTC m=+3828.798877144" lastFinishedPulling="2025-11-24 13:03:04.638726154 +0000 UTC m=+3831.253054104" observedRunningTime="2025-11-24 13:03:05.239340014 +0000 UTC m=+3831.853667984" watchObservedRunningTime="2025-11-24 13:03:05.245941353 +0000 UTC m=+3831.860269313" Nov 24 13:03:10 crc kubenswrapper[4930]: I1124 13:03:10.637922 
4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:10 crc kubenswrapper[4930]: I1124 13:03:10.638294 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:10 crc kubenswrapper[4930]: I1124 13:03:10.686888 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:11 crc kubenswrapper[4930]: I1124 13:03:11.315720 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:11 crc kubenswrapper[4930]: I1124 13:03:11.362753 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zchk8"] Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.288126 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zchk8" podUID="102030ab-a063-44ab-9052-a9b9bdde7d61" containerName="registry-server" containerID="cri-o://fb0b3ca4692135225fe575bb09255dd59b2535ba3b8e5c747079775cab9ef24f" gracePeriod=2 Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.608193 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9jt5w"] Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.613576 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.645737 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jt5w"] Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.816968 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29aca205-9637-47c8-9ab4-a5e1068f2c79-utilities\") pod \"redhat-operators-9jt5w\" (UID: \"29aca205-9637-47c8-9ab4-a5e1068f2c79\") " pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.817323 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29aca205-9637-47c8-9ab4-a5e1068f2c79-catalog-content\") pod \"redhat-operators-9jt5w\" (UID: \"29aca205-9637-47c8-9ab4-a5e1068f2c79\") " pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.817420 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwc5\" (UniqueName: \"kubernetes.io/projected/29aca205-9637-47c8-9ab4-a5e1068f2c79-kube-api-access-njwc5\") pod \"redhat-operators-9jt5w\" (UID: \"29aca205-9637-47c8-9ab4-a5e1068f2c79\") " pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.919802 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29aca205-9637-47c8-9ab4-a5e1068f2c79-catalog-content\") pod \"redhat-operators-9jt5w\" (UID: \"29aca205-9637-47c8-9ab4-a5e1068f2c79\") " pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.919860 4930 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-njwc5\" (UniqueName: \"kubernetes.io/projected/29aca205-9637-47c8-9ab4-a5e1068f2c79-kube-api-access-njwc5\") pod \"redhat-operators-9jt5w\" (UID: \"29aca205-9637-47c8-9ab4-a5e1068f2c79\") " pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.920030 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29aca205-9637-47c8-9ab4-a5e1068f2c79-utilities\") pod \"redhat-operators-9jt5w\" (UID: \"29aca205-9637-47c8-9ab4-a5e1068f2c79\") " pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.920459 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29aca205-9637-47c8-9ab4-a5e1068f2c79-utilities\") pod \"redhat-operators-9jt5w\" (UID: \"29aca205-9637-47c8-9ab4-a5e1068f2c79\") " pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.920478 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29aca205-9637-47c8-9ab4-a5e1068f2c79-catalog-content\") pod \"redhat-operators-9jt5w\" (UID: \"29aca205-9637-47c8-9ab4-a5e1068f2c79\") " pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.951637 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njwc5\" (UniqueName: \"kubernetes.io/projected/29aca205-9637-47c8-9ab4-a5e1068f2c79-kube-api-access-njwc5\") pod \"redhat-operators-9jt5w\" (UID: \"29aca205-9637-47c8-9ab4-a5e1068f2c79\") " pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:13 crc kubenswrapper[4930]: I1124 13:03:13.987667 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.323122 4930 generic.go:334] "Generic (PLEG): container finished" podID="102030ab-a063-44ab-9052-a9b9bdde7d61" containerID="fb0b3ca4692135225fe575bb09255dd59b2535ba3b8e5c747079775cab9ef24f" exitCode=0 Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.323589 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zchk8" event={"ID":"102030ab-a063-44ab-9052-a9b9bdde7d61","Type":"ContainerDied","Data":"fb0b3ca4692135225fe575bb09255dd59b2535ba3b8e5c747079775cab9ef24f"} Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.323895 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zchk8" event={"ID":"102030ab-a063-44ab-9052-a9b9bdde7d61","Type":"ContainerDied","Data":"37c90509ada35ba04f1f013bb5720603972861cae45f1cf91cdbce190b7ed104"} Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.323914 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37c90509ada35ba04f1f013bb5720603972861cae45f1cf91cdbce190b7ed104" Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.355369 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.531362 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102030ab-a063-44ab-9052-a9b9bdde7d61-catalog-content\") pod \"102030ab-a063-44ab-9052-a9b9bdde7d61\" (UID: \"102030ab-a063-44ab-9052-a9b9bdde7d61\") " Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.531569 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw6g7\" (UniqueName: \"kubernetes.io/projected/102030ab-a063-44ab-9052-a9b9bdde7d61-kube-api-access-sw6g7\") pod \"102030ab-a063-44ab-9052-a9b9bdde7d61\" (UID: \"102030ab-a063-44ab-9052-a9b9bdde7d61\") " Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.531657 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102030ab-a063-44ab-9052-a9b9bdde7d61-utilities\") pod \"102030ab-a063-44ab-9052-a9b9bdde7d61\" (UID: \"102030ab-a063-44ab-9052-a9b9bdde7d61\") " Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.532859 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102030ab-a063-44ab-9052-a9b9bdde7d61-utilities" (OuterVolumeSpecName: "utilities") pod "102030ab-a063-44ab-9052-a9b9bdde7d61" (UID: "102030ab-a063-44ab-9052-a9b9bdde7d61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.536777 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102030ab-a063-44ab-9052-a9b9bdde7d61-kube-api-access-sw6g7" (OuterVolumeSpecName: "kube-api-access-sw6g7") pod "102030ab-a063-44ab-9052-a9b9bdde7d61" (UID: "102030ab-a063-44ab-9052-a9b9bdde7d61"). InnerVolumeSpecName "kube-api-access-sw6g7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.563077 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jt5w"] Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.598832 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102030ab-a063-44ab-9052-a9b9bdde7d61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "102030ab-a063-44ab-9052-a9b9bdde7d61" (UID: "102030ab-a063-44ab-9052-a9b9bdde7d61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.633713 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw6g7\" (UniqueName: \"kubernetes.io/projected/102030ab-a063-44ab-9052-a9b9bdde7d61-kube-api-access-sw6g7\") on node \"crc\" DevicePath \"\"" Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.633752 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102030ab-a063-44ab-9052-a9b9bdde7d61-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:03:14 crc kubenswrapper[4930]: I1124 13:03:14.633764 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102030ab-a063-44ab-9052-a9b9bdde7d61-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:03:15 crc kubenswrapper[4930]: I1124 13:03:15.333096 4930 generic.go:334] "Generic (PLEG): container finished" podID="29aca205-9637-47c8-9ab4-a5e1068f2c79" containerID="ef7c53b8cd95d4c21f0d84bcedbc293a94a59ec9918098dc5c36d0319b61aa9c" exitCode=0 Nov 24 13:03:15 crc kubenswrapper[4930]: I1124 13:03:15.333195 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jt5w" 
event={"ID":"29aca205-9637-47c8-9ab4-a5e1068f2c79","Type":"ContainerDied","Data":"ef7c53b8cd95d4c21f0d84bcedbc293a94a59ec9918098dc5c36d0319b61aa9c"} Nov 24 13:03:15 crc kubenswrapper[4930]: I1124 13:03:15.333495 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jt5w" event={"ID":"29aca205-9637-47c8-9ab4-a5e1068f2c79","Type":"ContainerStarted","Data":"1acbeb8a6afa4b16f2b6c3ac31f1b9cb4a54bc95600727658d3869d78b17acd8"} Nov 24 13:03:15 crc kubenswrapper[4930]: I1124 13:03:15.333552 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zchk8" Nov 24 13:03:15 crc kubenswrapper[4930]: I1124 13:03:15.372292 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zchk8"] Nov 24 13:03:15 crc kubenswrapper[4930]: I1124 13:03:15.379712 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zchk8"] Nov 24 13:03:16 crc kubenswrapper[4930]: I1124 13:03:16.095340 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="102030ab-a063-44ab-9052-a9b9bdde7d61" path="/var/lib/kubelet/pods/102030ab-a063-44ab-9052-a9b9bdde7d61/volumes" Nov 24 13:03:23 crc kubenswrapper[4930]: I1124 13:03:23.442154 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jt5w" event={"ID":"29aca205-9637-47c8-9ab4-a5e1068f2c79","Type":"ContainerStarted","Data":"02ef1e5808721317f2d348dde5597ae63bf44f2b4bda965d2440dce19f4517b0"} Nov 24 13:03:25 crc kubenswrapper[4930]: I1124 13:03:25.477476 4930 generic.go:334] "Generic (PLEG): container finished" podID="29aca205-9637-47c8-9ab4-a5e1068f2c79" containerID="02ef1e5808721317f2d348dde5597ae63bf44f2b4bda965d2440dce19f4517b0" exitCode=0 Nov 24 13:03:25 crc kubenswrapper[4930]: I1124 13:03:25.477671 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9jt5w" event={"ID":"29aca205-9637-47c8-9ab4-a5e1068f2c79","Type":"ContainerDied","Data":"02ef1e5808721317f2d348dde5597ae63bf44f2b4bda965d2440dce19f4517b0"} Nov 24 13:03:26 crc kubenswrapper[4930]: I1124 13:03:26.489422 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jt5w" event={"ID":"29aca205-9637-47c8-9ab4-a5e1068f2c79","Type":"ContainerStarted","Data":"6859cda0148f0c328edac70451a72541ea2b77eca35fa0fde57e306b5b00fc02"} Nov 24 13:03:26 crc kubenswrapper[4930]: I1124 13:03:26.521226 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9jt5w" podStartSLOduration=2.990888194 podStartE2EDuration="13.521206596s" podCreationTimestamp="2025-11-24 13:03:13 +0000 UTC" firstStartedPulling="2025-11-24 13:03:15.334588674 +0000 UTC m=+3841.948916614" lastFinishedPulling="2025-11-24 13:03:25.864907056 +0000 UTC m=+3852.479235016" observedRunningTime="2025-11-24 13:03:26.517398406 +0000 UTC m=+3853.131726356" watchObservedRunningTime="2025-11-24 13:03:26.521206596 +0000 UTC m=+3853.135534546" Nov 24 13:03:31 crc kubenswrapper[4930]: I1124 13:03:31.808816 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:03:31 crc kubenswrapper[4930]: I1124 13:03:31.809667 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:03:31 crc kubenswrapper[4930]: I1124 13:03:31.809723 4930 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 13:03:31 crc kubenswrapper[4930]: I1124 13:03:31.810354 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80"} pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:03:31 crc kubenswrapper[4930]: I1124 13:03:31.810422 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" gracePeriod=600 Nov 24 13:03:32 crc kubenswrapper[4930]: E1124 13:03:32.472628 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:03:32 crc kubenswrapper[4930]: I1124 13:03:32.545073 4930 generic.go:334] "Generic (PLEG): container finished" podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" exitCode=0 Nov 24 13:03:32 crc kubenswrapper[4930]: I1124 13:03:32.545117 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" 
event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80"} Nov 24 13:03:32 crc kubenswrapper[4930]: I1124 13:03:32.545153 4930 scope.go:117] "RemoveContainer" containerID="8d7b65a5712740c01ce1afcfac553a05814266846ca2f298cc77ddec359b6809" Nov 24 13:03:32 crc kubenswrapper[4930]: I1124 13:03:32.545785 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:03:32 crc kubenswrapper[4930]: E1124 13:03:32.546067 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:03:33 crc kubenswrapper[4930]: I1124 13:03:33.988359 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:33 crc kubenswrapper[4930]: I1124 13:03:33.988821 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:34 crc kubenswrapper[4930]: I1124 13:03:34.041424 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:34 crc kubenswrapper[4930]: I1124 13:03:34.608531 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9jt5w" Nov 24 13:03:34 crc kubenswrapper[4930]: I1124 13:03:34.675920 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jt5w"] Nov 24 13:03:34 crc kubenswrapper[4930]: I1124 13:03:34.707766 4930 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7z7jl"] Nov 24 13:03:34 crc kubenswrapper[4930]: I1124 13:03:34.708050 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7z7jl" podUID="f1fad967-63fa-4433-8aad-deb662733831" containerName="registry-server" containerID="cri-o://2a5460c667b9af626abdaa10216b8c004bbb48307ec270eb62b2aa049bc7a139" gracePeriod=2 Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.350719 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.490925 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw9tk\" (UniqueName: \"kubernetes.io/projected/f1fad967-63fa-4433-8aad-deb662733831-kube-api-access-sw9tk\") pod \"f1fad967-63fa-4433-8aad-deb662733831\" (UID: \"f1fad967-63fa-4433-8aad-deb662733831\") " Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.491516 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1fad967-63fa-4433-8aad-deb662733831-catalog-content\") pod \"f1fad967-63fa-4433-8aad-deb662733831\" (UID: \"f1fad967-63fa-4433-8aad-deb662733831\") " Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.491629 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1fad967-63fa-4433-8aad-deb662733831-utilities\") pod \"f1fad967-63fa-4433-8aad-deb662733831\" (UID: \"f1fad967-63fa-4433-8aad-deb662733831\") " Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.496145 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1fad967-63fa-4433-8aad-deb662733831-utilities" (OuterVolumeSpecName: "utilities") pod 
"f1fad967-63fa-4433-8aad-deb662733831" (UID: "f1fad967-63fa-4433-8aad-deb662733831"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.503873 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1fad967-63fa-4433-8aad-deb662733831-kube-api-access-sw9tk" (OuterVolumeSpecName: "kube-api-access-sw9tk") pod "f1fad967-63fa-4433-8aad-deb662733831" (UID: "f1fad967-63fa-4433-8aad-deb662733831"). InnerVolumeSpecName "kube-api-access-sw9tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.575030 4930 generic.go:334] "Generic (PLEG): container finished" podID="f1fad967-63fa-4433-8aad-deb662733831" containerID="2a5460c667b9af626abdaa10216b8c004bbb48307ec270eb62b2aa049bc7a139" exitCode=0 Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.575116 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7z7jl" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.575141 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z7jl" event={"ID":"f1fad967-63fa-4433-8aad-deb662733831","Type":"ContainerDied","Data":"2a5460c667b9af626abdaa10216b8c004bbb48307ec270eb62b2aa049bc7a139"} Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.575200 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z7jl" event={"ID":"f1fad967-63fa-4433-8aad-deb662733831","Type":"ContainerDied","Data":"20a0be0e97d9d629815fb78651be3277b38295e4f04508d0523df6cf41045150"} Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.575224 4930 scope.go:117] "RemoveContainer" containerID="2a5460c667b9af626abdaa10216b8c004bbb48307ec270eb62b2aa049bc7a139" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.594097 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1fad967-63fa-4433-8aad-deb662733831-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.594283 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw9tk\" (UniqueName: \"kubernetes.io/projected/f1fad967-63fa-4433-8aad-deb662733831-kube-api-access-sw9tk\") on node \"crc\" DevicePath \"\"" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.597423 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1fad967-63fa-4433-8aad-deb662733831-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1fad967-63fa-4433-8aad-deb662733831" (UID: "f1fad967-63fa-4433-8aad-deb662733831"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.597880 4930 scope.go:117] "RemoveContainer" containerID="f7346f73c2402c74bc06ac2a22cd4a5504a12ce5ce0cc9f7f262fb953478699c" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.620261 4930 scope.go:117] "RemoveContainer" containerID="03c842e7519ff89e46f306e5a8ba0a36a49c02680ccd145d0e2f0090aee81dc1" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.662316 4930 scope.go:117] "RemoveContainer" containerID="2a5460c667b9af626abdaa10216b8c004bbb48307ec270eb62b2aa049bc7a139" Nov 24 13:03:35 crc kubenswrapper[4930]: E1124 13:03:35.662954 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a5460c667b9af626abdaa10216b8c004bbb48307ec270eb62b2aa049bc7a139\": container with ID starting with 2a5460c667b9af626abdaa10216b8c004bbb48307ec270eb62b2aa049bc7a139 not found: ID does not exist" containerID="2a5460c667b9af626abdaa10216b8c004bbb48307ec270eb62b2aa049bc7a139" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.663007 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5460c667b9af626abdaa10216b8c004bbb48307ec270eb62b2aa049bc7a139"} err="failed to get container status \"2a5460c667b9af626abdaa10216b8c004bbb48307ec270eb62b2aa049bc7a139\": rpc error: code = NotFound desc = could not find container \"2a5460c667b9af626abdaa10216b8c004bbb48307ec270eb62b2aa049bc7a139\": container with ID starting with 2a5460c667b9af626abdaa10216b8c004bbb48307ec270eb62b2aa049bc7a139 not found: ID does not exist" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.663038 4930 scope.go:117] "RemoveContainer" containerID="f7346f73c2402c74bc06ac2a22cd4a5504a12ce5ce0cc9f7f262fb953478699c" Nov 24 13:03:35 crc kubenswrapper[4930]: E1124 13:03:35.663342 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"f7346f73c2402c74bc06ac2a22cd4a5504a12ce5ce0cc9f7f262fb953478699c\": container with ID starting with f7346f73c2402c74bc06ac2a22cd4a5504a12ce5ce0cc9f7f262fb953478699c not found: ID does not exist" containerID="f7346f73c2402c74bc06ac2a22cd4a5504a12ce5ce0cc9f7f262fb953478699c" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.663379 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7346f73c2402c74bc06ac2a22cd4a5504a12ce5ce0cc9f7f262fb953478699c"} err="failed to get container status \"f7346f73c2402c74bc06ac2a22cd4a5504a12ce5ce0cc9f7f262fb953478699c\": rpc error: code = NotFound desc = could not find container \"f7346f73c2402c74bc06ac2a22cd4a5504a12ce5ce0cc9f7f262fb953478699c\": container with ID starting with f7346f73c2402c74bc06ac2a22cd4a5504a12ce5ce0cc9f7f262fb953478699c not found: ID does not exist" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.663400 4930 scope.go:117] "RemoveContainer" containerID="03c842e7519ff89e46f306e5a8ba0a36a49c02680ccd145d0e2f0090aee81dc1" Nov 24 13:03:35 crc kubenswrapper[4930]: E1124 13:03:35.663732 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03c842e7519ff89e46f306e5a8ba0a36a49c02680ccd145d0e2f0090aee81dc1\": container with ID starting with 03c842e7519ff89e46f306e5a8ba0a36a49c02680ccd145d0e2f0090aee81dc1 not found: ID does not exist" containerID="03c842e7519ff89e46f306e5a8ba0a36a49c02680ccd145d0e2f0090aee81dc1" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.663756 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03c842e7519ff89e46f306e5a8ba0a36a49c02680ccd145d0e2f0090aee81dc1"} err="failed to get container status \"03c842e7519ff89e46f306e5a8ba0a36a49c02680ccd145d0e2f0090aee81dc1\": rpc error: code = NotFound desc = could not find container \"03c842e7519ff89e46f306e5a8ba0a36a49c02680ccd145d0e2f0090aee81dc1\": 
container with ID starting with 03c842e7519ff89e46f306e5a8ba0a36a49c02680ccd145d0e2f0090aee81dc1 not found: ID does not exist" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.696734 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1fad967-63fa-4433-8aad-deb662733831-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.913817 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7z7jl"] Nov 24 13:03:35 crc kubenswrapper[4930]: I1124 13:03:35.924199 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7z7jl"] Nov 24 13:03:36 crc kubenswrapper[4930]: I1124 13:03:36.094917 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1fad967-63fa-4433-8aad-deb662733831" path="/var/lib/kubelet/pods/f1fad967-63fa-4433-8aad-deb662733831/volumes" Nov 24 13:03:46 crc kubenswrapper[4930]: I1124 13:03:46.084590 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:03:46 crc kubenswrapper[4930]: E1124 13:03:46.085662 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.059769 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xfdtg/must-gather-mq7pz"] Nov 24 13:03:59 crc kubenswrapper[4930]: E1124 13:03:59.060766 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102030ab-a063-44ab-9052-a9b9bdde7d61" 
containerName="registry-server" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.060783 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="102030ab-a063-44ab-9052-a9b9bdde7d61" containerName="registry-server" Nov 24 13:03:59 crc kubenswrapper[4930]: E1124 13:03:59.060797 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1fad967-63fa-4433-8aad-deb662733831" containerName="extract-content" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.060803 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1fad967-63fa-4433-8aad-deb662733831" containerName="extract-content" Nov 24 13:03:59 crc kubenswrapper[4930]: E1124 13:03:59.060818 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102030ab-a063-44ab-9052-a9b9bdde7d61" containerName="extract-content" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.060824 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="102030ab-a063-44ab-9052-a9b9bdde7d61" containerName="extract-content" Nov 24 13:03:59 crc kubenswrapper[4930]: E1124 13:03:59.060837 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1fad967-63fa-4433-8aad-deb662733831" containerName="extract-utilities" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.060843 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1fad967-63fa-4433-8aad-deb662733831" containerName="extract-utilities" Nov 24 13:03:59 crc kubenswrapper[4930]: E1124 13:03:59.060859 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1fad967-63fa-4433-8aad-deb662733831" containerName="registry-server" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.060864 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1fad967-63fa-4433-8aad-deb662733831" containerName="registry-server" Nov 24 13:03:59 crc kubenswrapper[4930]: E1124 13:03:59.060886 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102030ab-a063-44ab-9052-a9b9bdde7d61" 
containerName="extract-utilities" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.060891 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="102030ab-a063-44ab-9052-a9b9bdde7d61" containerName="extract-utilities" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.061053 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1fad967-63fa-4433-8aad-deb662733831" containerName="registry-server" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.061082 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="102030ab-a063-44ab-9052-a9b9bdde7d61" containerName="registry-server" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.062423 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xfdtg/must-gather-mq7pz" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.066150 4930 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xfdtg"/"default-dockercfg-pcv8h" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.066759 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xfdtg"/"kube-root-ca.crt" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.067050 4930 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xfdtg"/"openshift-service-ca.crt" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.085185 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.085341 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xfdtg/must-gather-mq7pz"] Nov 24 13:03:59 crc kubenswrapper[4930]: E1124 13:03:59.085481 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.185309 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b32af007-868e-41e2-bb7f-3a6fa74cb42e-must-gather-output\") pod \"must-gather-mq7pz\" (UID: \"b32af007-868e-41e2-bb7f-3a6fa74cb42e\") " pod="openshift-must-gather-xfdtg/must-gather-mq7pz" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.185979 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ccg6\" (UniqueName: \"kubernetes.io/projected/b32af007-868e-41e2-bb7f-3a6fa74cb42e-kube-api-access-6ccg6\") pod \"must-gather-mq7pz\" (UID: \"b32af007-868e-41e2-bb7f-3a6fa74cb42e\") " pod="openshift-must-gather-xfdtg/must-gather-mq7pz" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.288012 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ccg6\" (UniqueName: \"kubernetes.io/projected/b32af007-868e-41e2-bb7f-3a6fa74cb42e-kube-api-access-6ccg6\") pod \"must-gather-mq7pz\" (UID: \"b32af007-868e-41e2-bb7f-3a6fa74cb42e\") " pod="openshift-must-gather-xfdtg/must-gather-mq7pz" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.288086 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b32af007-868e-41e2-bb7f-3a6fa74cb42e-must-gather-output\") pod \"must-gather-mq7pz\" (UID: \"b32af007-868e-41e2-bb7f-3a6fa74cb42e\") " pod="openshift-must-gather-xfdtg/must-gather-mq7pz" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.288627 4930 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b32af007-868e-41e2-bb7f-3a6fa74cb42e-must-gather-output\") pod \"must-gather-mq7pz\" (UID: \"b32af007-868e-41e2-bb7f-3a6fa74cb42e\") " pod="openshift-must-gather-xfdtg/must-gather-mq7pz" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.334228 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ccg6\" (UniqueName: \"kubernetes.io/projected/b32af007-868e-41e2-bb7f-3a6fa74cb42e-kube-api-access-6ccg6\") pod \"must-gather-mq7pz\" (UID: \"b32af007-868e-41e2-bb7f-3a6fa74cb42e\") " pod="openshift-must-gather-xfdtg/must-gather-mq7pz" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.387028 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xfdtg/must-gather-mq7pz" Nov 24 13:03:59 crc kubenswrapper[4930]: I1124 13:03:59.890031 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xfdtg/must-gather-mq7pz"] Nov 24 13:04:00 crc kubenswrapper[4930]: I1124 13:04:00.822599 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xfdtg/must-gather-mq7pz" event={"ID":"b32af007-868e-41e2-bb7f-3a6fa74cb42e","Type":"ContainerStarted","Data":"0071c73a80c7c1983f18af63ad710624e4d04d56a561bb9e63d1d832f6d7a114"} Nov 24 13:04:00 crc kubenswrapper[4930]: I1124 13:04:00.823040 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xfdtg/must-gather-mq7pz" event={"ID":"b32af007-868e-41e2-bb7f-3a6fa74cb42e","Type":"ContainerStarted","Data":"a09bbba8b86383a8fc1ecc2c23c815f30ad8436f9a00aba2229fbf56fe3171ad"} Nov 24 13:04:00 crc kubenswrapper[4930]: I1124 13:04:00.823053 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xfdtg/must-gather-mq7pz" event={"ID":"b32af007-868e-41e2-bb7f-3a6fa74cb42e","Type":"ContainerStarted","Data":"8d1db07a00ed3e9e94ca26aa0d8ef913d786f465df0fdcb0966315ffede391ef"} Nov 24 13:04:00 crc 
kubenswrapper[4930]: I1124 13:04:00.848063 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xfdtg/must-gather-mq7pz" podStartSLOduration=1.848036327 podStartE2EDuration="1.848036327s" podCreationTimestamp="2025-11-24 13:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:04:00.840340085 +0000 UTC m=+3887.454668035" watchObservedRunningTime="2025-11-24 13:04:00.848036327 +0000 UTC m=+3887.462364297" Nov 24 13:04:04 crc kubenswrapper[4930]: I1124 13:04:04.068339 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xfdtg/crc-debug-s85kp"] Nov 24 13:04:04 crc kubenswrapper[4930]: I1124 13:04:04.070200 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xfdtg/crc-debug-s85kp" Nov 24 13:04:04 crc kubenswrapper[4930]: I1124 13:04:04.200068 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cabac2df-8826-488e-a16b-ea61821e4c96-host\") pod \"crc-debug-s85kp\" (UID: \"cabac2df-8826-488e-a16b-ea61821e4c96\") " pod="openshift-must-gather-xfdtg/crc-debug-s85kp" Nov 24 13:04:04 crc kubenswrapper[4930]: I1124 13:04:04.201021 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zdtj\" (UniqueName: \"kubernetes.io/projected/cabac2df-8826-488e-a16b-ea61821e4c96-kube-api-access-5zdtj\") pod \"crc-debug-s85kp\" (UID: \"cabac2df-8826-488e-a16b-ea61821e4c96\") " pod="openshift-must-gather-xfdtg/crc-debug-s85kp" Nov 24 13:04:04 crc kubenswrapper[4930]: I1124 13:04:04.302555 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cabac2df-8826-488e-a16b-ea61821e4c96-host\") pod \"crc-debug-s85kp\" (UID: 
\"cabac2df-8826-488e-a16b-ea61821e4c96\") " pod="openshift-must-gather-xfdtg/crc-debug-s85kp" Nov 24 13:04:04 crc kubenswrapper[4930]: I1124 13:04:04.302646 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cabac2df-8826-488e-a16b-ea61821e4c96-host\") pod \"crc-debug-s85kp\" (UID: \"cabac2df-8826-488e-a16b-ea61821e4c96\") " pod="openshift-must-gather-xfdtg/crc-debug-s85kp" Nov 24 13:04:04 crc kubenswrapper[4930]: I1124 13:04:04.302966 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zdtj\" (UniqueName: \"kubernetes.io/projected/cabac2df-8826-488e-a16b-ea61821e4c96-kube-api-access-5zdtj\") pod \"crc-debug-s85kp\" (UID: \"cabac2df-8826-488e-a16b-ea61821e4c96\") " pod="openshift-must-gather-xfdtg/crc-debug-s85kp" Nov 24 13:04:04 crc kubenswrapper[4930]: I1124 13:04:04.329473 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zdtj\" (UniqueName: \"kubernetes.io/projected/cabac2df-8826-488e-a16b-ea61821e4c96-kube-api-access-5zdtj\") pod \"crc-debug-s85kp\" (UID: \"cabac2df-8826-488e-a16b-ea61821e4c96\") " pod="openshift-must-gather-xfdtg/crc-debug-s85kp" Nov 24 13:04:04 crc kubenswrapper[4930]: I1124 13:04:04.400674 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xfdtg/crc-debug-s85kp" Nov 24 13:04:04 crc kubenswrapper[4930]: W1124 13:04:04.434436 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcabac2df_8826_488e_a16b_ea61821e4c96.slice/crio-b4de9f57f2cf4d80a948a73b0f77f656c9fb55c5c7db02d861eefb1d19b03f07 WatchSource:0}: Error finding container b4de9f57f2cf4d80a948a73b0f77f656c9fb55c5c7db02d861eefb1d19b03f07: Status 404 returned error can't find the container with id b4de9f57f2cf4d80a948a73b0f77f656c9fb55c5c7db02d861eefb1d19b03f07 Nov 24 13:04:04 crc kubenswrapper[4930]: I1124 13:04:04.859489 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xfdtg/crc-debug-s85kp" event={"ID":"cabac2df-8826-488e-a16b-ea61821e4c96","Type":"ContainerStarted","Data":"f5a266edd9ead7725bdb596dd9aff4e6f1a884896ce6b1e0bd119149c85e8fb0"} Nov 24 13:04:04 crc kubenswrapper[4930]: I1124 13:04:04.860121 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xfdtg/crc-debug-s85kp" event={"ID":"cabac2df-8826-488e-a16b-ea61821e4c96","Type":"ContainerStarted","Data":"b4de9f57f2cf4d80a948a73b0f77f656c9fb55c5c7db02d861eefb1d19b03f07"} Nov 24 13:04:14 crc kubenswrapper[4930]: I1124 13:04:14.091882 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:04:14 crc kubenswrapper[4930]: E1124 13:04:14.093064 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:04:28 crc kubenswrapper[4930]: I1124 13:04:28.085415 4930 
scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:04:28 crc kubenswrapper[4930]: E1124 13:04:28.086961 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:04:41 crc kubenswrapper[4930]: I1124 13:04:41.179680 4930 generic.go:334] "Generic (PLEG): container finished" podID="cabac2df-8826-488e-a16b-ea61821e4c96" containerID="f5a266edd9ead7725bdb596dd9aff4e6f1a884896ce6b1e0bd119149c85e8fb0" exitCode=0 Nov 24 13:04:41 crc kubenswrapper[4930]: I1124 13:04:41.179765 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xfdtg/crc-debug-s85kp" event={"ID":"cabac2df-8826-488e-a16b-ea61821e4c96","Type":"ContainerDied","Data":"f5a266edd9ead7725bdb596dd9aff4e6f1a884896ce6b1e0bd119149c85e8fb0"} Nov 24 13:04:42 crc kubenswrapper[4930]: I1124 13:04:42.086359 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:04:42 crc kubenswrapper[4930]: E1124 13:04:42.087782 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:04:42 crc kubenswrapper[4930]: I1124 13:04:42.296390 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xfdtg/crc-debug-s85kp" Nov 24 13:04:42 crc kubenswrapper[4930]: I1124 13:04:42.332130 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xfdtg/crc-debug-s85kp"] Nov 24 13:04:42 crc kubenswrapper[4930]: I1124 13:04:42.339857 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xfdtg/crc-debug-s85kp"] Nov 24 13:04:42 crc kubenswrapper[4930]: I1124 13:04:42.343393 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zdtj\" (UniqueName: \"kubernetes.io/projected/cabac2df-8826-488e-a16b-ea61821e4c96-kube-api-access-5zdtj\") pod \"cabac2df-8826-488e-a16b-ea61821e4c96\" (UID: \"cabac2df-8826-488e-a16b-ea61821e4c96\") " Nov 24 13:04:42 crc kubenswrapper[4930]: I1124 13:04:42.344639 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cabac2df-8826-488e-a16b-ea61821e4c96-host\") pod \"cabac2df-8826-488e-a16b-ea61821e4c96\" (UID: \"cabac2df-8826-488e-a16b-ea61821e4c96\") " Nov 24 13:04:42 crc kubenswrapper[4930]: I1124 13:04:42.344727 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cabac2df-8826-488e-a16b-ea61821e4c96-host" (OuterVolumeSpecName: "host") pod "cabac2df-8826-488e-a16b-ea61821e4c96" (UID: "cabac2df-8826-488e-a16b-ea61821e4c96"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 13:04:42 crc kubenswrapper[4930]: I1124 13:04:42.345104 4930 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cabac2df-8826-488e-a16b-ea61821e4c96-host\") on node \"crc\" DevicePath \"\"" Nov 24 13:04:42 crc kubenswrapper[4930]: I1124 13:04:42.349268 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cabac2df-8826-488e-a16b-ea61821e4c96-kube-api-access-5zdtj" (OuterVolumeSpecName: "kube-api-access-5zdtj") pod "cabac2df-8826-488e-a16b-ea61821e4c96" (UID: "cabac2df-8826-488e-a16b-ea61821e4c96"). InnerVolumeSpecName "kube-api-access-5zdtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:04:42 crc kubenswrapper[4930]: I1124 13:04:42.446662 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zdtj\" (UniqueName: \"kubernetes.io/projected/cabac2df-8826-488e-a16b-ea61821e4c96-kube-api-access-5zdtj\") on node \"crc\" DevicePath \"\"" Nov 24 13:04:43 crc kubenswrapper[4930]: I1124 13:04:43.195812 4930 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4de9f57f2cf4d80a948a73b0f77f656c9fb55c5c7db02d861eefb1d19b03f07" Nov 24 13:04:43 crc kubenswrapper[4930]: I1124 13:04:43.196191 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xfdtg/crc-debug-s85kp" Nov 24 13:04:43 crc kubenswrapper[4930]: I1124 13:04:43.500970 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xfdtg/crc-debug-9jh9f"] Nov 24 13:04:43 crc kubenswrapper[4930]: E1124 13:04:43.501370 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cabac2df-8826-488e-a16b-ea61821e4c96" containerName="container-00" Nov 24 13:04:43 crc kubenswrapper[4930]: I1124 13:04:43.501386 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="cabac2df-8826-488e-a16b-ea61821e4c96" containerName="container-00" Nov 24 13:04:43 crc kubenswrapper[4930]: I1124 13:04:43.501586 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="cabac2df-8826-488e-a16b-ea61821e4c96" containerName="container-00" Nov 24 13:04:43 crc kubenswrapper[4930]: I1124 13:04:43.502273 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xfdtg/crc-debug-9jh9f" Nov 24 13:04:43 crc kubenswrapper[4930]: I1124 13:04:43.569390 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/574bd9f0-bede-4738-925e-334fafd84da4-host\") pod \"crc-debug-9jh9f\" (UID: \"574bd9f0-bede-4738-925e-334fafd84da4\") " pod="openshift-must-gather-xfdtg/crc-debug-9jh9f" Nov 24 13:04:43 crc kubenswrapper[4930]: I1124 13:04:43.569447 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdln6\" (UniqueName: \"kubernetes.io/projected/574bd9f0-bede-4738-925e-334fafd84da4-kube-api-access-tdln6\") pod \"crc-debug-9jh9f\" (UID: \"574bd9f0-bede-4738-925e-334fafd84da4\") " pod="openshift-must-gather-xfdtg/crc-debug-9jh9f" Nov 24 13:04:43 crc kubenswrapper[4930]: I1124 13:04:43.671277 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/574bd9f0-bede-4738-925e-334fafd84da4-host\") pod \"crc-debug-9jh9f\" (UID: \"574bd9f0-bede-4738-925e-334fafd84da4\") " pod="openshift-must-gather-xfdtg/crc-debug-9jh9f" Nov 24 13:04:43 crc kubenswrapper[4930]: I1124 13:04:43.671347 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdln6\" (UniqueName: \"kubernetes.io/projected/574bd9f0-bede-4738-925e-334fafd84da4-kube-api-access-tdln6\") pod \"crc-debug-9jh9f\" (UID: \"574bd9f0-bede-4738-925e-334fafd84da4\") " pod="openshift-must-gather-xfdtg/crc-debug-9jh9f" Nov 24 13:04:43 crc kubenswrapper[4930]: I1124 13:04:43.671843 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/574bd9f0-bede-4738-925e-334fafd84da4-host\") pod \"crc-debug-9jh9f\" (UID: \"574bd9f0-bede-4738-925e-334fafd84da4\") " pod="openshift-must-gather-xfdtg/crc-debug-9jh9f" Nov 24 13:04:43 crc kubenswrapper[4930]: I1124 13:04:43.688479 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdln6\" (UniqueName: \"kubernetes.io/projected/574bd9f0-bede-4738-925e-334fafd84da4-kube-api-access-tdln6\") pod \"crc-debug-9jh9f\" (UID: \"574bd9f0-bede-4738-925e-334fafd84da4\") " pod="openshift-must-gather-xfdtg/crc-debug-9jh9f" Nov 24 13:04:43 crc kubenswrapper[4930]: I1124 13:04:43.821172 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xfdtg/crc-debug-9jh9f" Nov 24 13:04:44 crc kubenswrapper[4930]: I1124 13:04:44.099716 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cabac2df-8826-488e-a16b-ea61821e4c96" path="/var/lib/kubelet/pods/cabac2df-8826-488e-a16b-ea61821e4c96/volumes" Nov 24 13:04:44 crc kubenswrapper[4930]: I1124 13:04:44.205748 4930 generic.go:334] "Generic (PLEG): container finished" podID="574bd9f0-bede-4738-925e-334fafd84da4" containerID="e5c8c4bf19616589a5b0385ddea34541babfae0e4bb99a4611b4be237f57f7d5" exitCode=0 Nov 24 13:04:44 crc kubenswrapper[4930]: I1124 13:04:44.205792 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xfdtg/crc-debug-9jh9f" event={"ID":"574bd9f0-bede-4738-925e-334fafd84da4","Type":"ContainerDied","Data":"e5c8c4bf19616589a5b0385ddea34541babfae0e4bb99a4611b4be237f57f7d5"} Nov 24 13:04:44 crc kubenswrapper[4930]: I1124 13:04:44.205817 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xfdtg/crc-debug-9jh9f" event={"ID":"574bd9f0-bede-4738-925e-334fafd84da4","Type":"ContainerStarted","Data":"b1ed560eabe6f0a324a2c2c7895f5c88acc55511830f602811e89c52c182e5ba"} Nov 24 13:04:44 crc kubenswrapper[4930]: I1124 13:04:44.657736 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xfdtg/crc-debug-9jh9f"] Nov 24 13:04:44 crc kubenswrapper[4930]: I1124 13:04:44.666571 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xfdtg/crc-debug-9jh9f"] Nov 24 13:04:45 crc kubenswrapper[4930]: I1124 13:04:45.309901 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xfdtg/crc-debug-9jh9f" Nov 24 13:04:45 crc kubenswrapper[4930]: I1124 13:04:45.401749 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdln6\" (UniqueName: \"kubernetes.io/projected/574bd9f0-bede-4738-925e-334fafd84da4-kube-api-access-tdln6\") pod \"574bd9f0-bede-4738-925e-334fafd84da4\" (UID: \"574bd9f0-bede-4738-925e-334fafd84da4\") " Nov 24 13:04:45 crc kubenswrapper[4930]: I1124 13:04:45.401951 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/574bd9f0-bede-4738-925e-334fafd84da4-host\") pod \"574bd9f0-bede-4738-925e-334fafd84da4\" (UID: \"574bd9f0-bede-4738-925e-334fafd84da4\") " Nov 24 13:04:45 crc kubenswrapper[4930]: I1124 13:04:45.402087 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/574bd9f0-bede-4738-925e-334fafd84da4-host" (OuterVolumeSpecName: "host") pod "574bd9f0-bede-4738-925e-334fafd84da4" (UID: "574bd9f0-bede-4738-925e-334fafd84da4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 13:04:45 crc kubenswrapper[4930]: I1124 13:04:45.402708 4930 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/574bd9f0-bede-4738-925e-334fafd84da4-host\") on node \"crc\" DevicePath \"\"" Nov 24 13:04:45 crc kubenswrapper[4930]: I1124 13:04:45.406897 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574bd9f0-bede-4738-925e-334fafd84da4-kube-api-access-tdln6" (OuterVolumeSpecName: "kube-api-access-tdln6") pod "574bd9f0-bede-4738-925e-334fafd84da4" (UID: "574bd9f0-bede-4738-925e-334fafd84da4"). InnerVolumeSpecName "kube-api-access-tdln6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:04:45 crc kubenswrapper[4930]: I1124 13:04:45.504914 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdln6\" (UniqueName: \"kubernetes.io/projected/574bd9f0-bede-4738-925e-334fafd84da4-kube-api-access-tdln6\") on node \"crc\" DevicePath \"\"" Nov 24 13:04:45 crc kubenswrapper[4930]: I1124 13:04:45.825470 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xfdtg/crc-debug-cj8gd"] Nov 24 13:04:45 crc kubenswrapper[4930]: E1124 13:04:45.825888 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574bd9f0-bede-4738-925e-334fafd84da4" containerName="container-00" Nov 24 13:04:45 crc kubenswrapper[4930]: I1124 13:04:45.825901 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="574bd9f0-bede-4738-925e-334fafd84da4" containerName="container-00" Nov 24 13:04:45 crc kubenswrapper[4930]: I1124 13:04:45.826080 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="574bd9f0-bede-4738-925e-334fafd84da4" containerName="container-00" Nov 24 13:04:45 crc kubenswrapper[4930]: I1124 13:04:45.826696 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xfdtg/crc-debug-cj8gd" Nov 24 13:04:45 crc kubenswrapper[4930]: I1124 13:04:45.912492 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp4qp\" (UniqueName: \"kubernetes.io/projected/cecb9548-9b8e-4697-9afe-be08c822742e-kube-api-access-pp4qp\") pod \"crc-debug-cj8gd\" (UID: \"cecb9548-9b8e-4697-9afe-be08c822742e\") " pod="openshift-must-gather-xfdtg/crc-debug-cj8gd" Nov 24 13:04:45 crc kubenswrapper[4930]: I1124 13:04:45.912577 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cecb9548-9b8e-4697-9afe-be08c822742e-host\") pod \"crc-debug-cj8gd\" (UID: \"cecb9548-9b8e-4697-9afe-be08c822742e\") " pod="openshift-must-gather-xfdtg/crc-debug-cj8gd" Nov 24 13:04:46 crc kubenswrapper[4930]: I1124 13:04:46.014729 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp4qp\" (UniqueName: \"kubernetes.io/projected/cecb9548-9b8e-4697-9afe-be08c822742e-kube-api-access-pp4qp\") pod \"crc-debug-cj8gd\" (UID: \"cecb9548-9b8e-4697-9afe-be08c822742e\") " pod="openshift-must-gather-xfdtg/crc-debug-cj8gd" Nov 24 13:04:46 crc kubenswrapper[4930]: I1124 13:04:46.014777 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cecb9548-9b8e-4697-9afe-be08c822742e-host\") pod \"crc-debug-cj8gd\" (UID: \"cecb9548-9b8e-4697-9afe-be08c822742e\") " pod="openshift-must-gather-xfdtg/crc-debug-cj8gd" Nov 24 13:04:46 crc kubenswrapper[4930]: I1124 13:04:46.014995 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cecb9548-9b8e-4697-9afe-be08c822742e-host\") pod \"crc-debug-cj8gd\" (UID: \"cecb9548-9b8e-4697-9afe-be08c822742e\") " pod="openshift-must-gather-xfdtg/crc-debug-cj8gd" Nov 24 13:04:46 crc 
kubenswrapper[4930]: I1124 13:04:46.031453 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp4qp\" (UniqueName: \"kubernetes.io/projected/cecb9548-9b8e-4697-9afe-be08c822742e-kube-api-access-pp4qp\") pod \"crc-debug-cj8gd\" (UID: \"cecb9548-9b8e-4697-9afe-be08c822742e\") " pod="openshift-must-gather-xfdtg/crc-debug-cj8gd" Nov 24 13:04:46 crc kubenswrapper[4930]: I1124 13:04:46.095362 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574bd9f0-bede-4738-925e-334fafd84da4" path="/var/lib/kubelet/pods/574bd9f0-bede-4738-925e-334fafd84da4/volumes" Nov 24 13:04:46 crc kubenswrapper[4930]: I1124 13:04:46.154968 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xfdtg/crc-debug-cj8gd" Nov 24 13:04:46 crc kubenswrapper[4930]: W1124 13:04:46.191070 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcecb9548_9b8e_4697_9afe_be08c822742e.slice/crio-f518066a7d067249035cfda78b508c57259af07b0aa056da7fc8ee6a2852bde9 WatchSource:0}: Error finding container f518066a7d067249035cfda78b508c57259af07b0aa056da7fc8ee6a2852bde9: Status 404 returned error can't find the container with id f518066a7d067249035cfda78b508c57259af07b0aa056da7fc8ee6a2852bde9 Nov 24 13:04:46 crc kubenswrapper[4930]: I1124 13:04:46.223034 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xfdtg/crc-debug-9jh9f" Nov 24 13:04:46 crc kubenswrapper[4930]: I1124 13:04:46.223042 4930 scope.go:117] "RemoveContainer" containerID="e5c8c4bf19616589a5b0385ddea34541babfae0e4bb99a4611b4be237f57f7d5" Nov 24 13:04:46 crc kubenswrapper[4930]: I1124 13:04:46.224499 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xfdtg/crc-debug-cj8gd" event={"ID":"cecb9548-9b8e-4697-9afe-be08c822742e","Type":"ContainerStarted","Data":"f518066a7d067249035cfda78b508c57259af07b0aa056da7fc8ee6a2852bde9"} Nov 24 13:04:47 crc kubenswrapper[4930]: I1124 13:04:47.236806 4930 generic.go:334] "Generic (PLEG): container finished" podID="cecb9548-9b8e-4697-9afe-be08c822742e" containerID="b9366c51220275b1a9d4dfc188f5b3f8c78e3112699ec476860be71382c6f5ad" exitCode=0 Nov 24 13:04:47 crc kubenswrapper[4930]: I1124 13:04:47.236903 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xfdtg/crc-debug-cj8gd" event={"ID":"cecb9548-9b8e-4697-9afe-be08c822742e","Type":"ContainerDied","Data":"b9366c51220275b1a9d4dfc188f5b3f8c78e3112699ec476860be71382c6f5ad"} Nov 24 13:04:47 crc kubenswrapper[4930]: I1124 13:04:47.279220 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xfdtg/crc-debug-cj8gd"] Nov 24 13:04:47 crc kubenswrapper[4930]: I1124 13:04:47.292811 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xfdtg/crc-debug-cj8gd"] Nov 24 13:04:48 crc kubenswrapper[4930]: I1124 13:04:48.361662 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xfdtg/crc-debug-cj8gd" Nov 24 13:04:48 crc kubenswrapper[4930]: I1124 13:04:48.467440 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cecb9548-9b8e-4697-9afe-be08c822742e-host\") pod \"cecb9548-9b8e-4697-9afe-be08c822742e\" (UID: \"cecb9548-9b8e-4697-9afe-be08c822742e\") " Nov 24 13:04:48 crc kubenswrapper[4930]: I1124 13:04:48.467522 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp4qp\" (UniqueName: \"kubernetes.io/projected/cecb9548-9b8e-4697-9afe-be08c822742e-kube-api-access-pp4qp\") pod \"cecb9548-9b8e-4697-9afe-be08c822742e\" (UID: \"cecb9548-9b8e-4697-9afe-be08c822742e\") " Nov 24 13:04:48 crc kubenswrapper[4930]: I1124 13:04:48.467626 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecb9548-9b8e-4697-9afe-be08c822742e-host" (OuterVolumeSpecName: "host") pod "cecb9548-9b8e-4697-9afe-be08c822742e" (UID: "cecb9548-9b8e-4697-9afe-be08c822742e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 13:04:48 crc kubenswrapper[4930]: I1124 13:04:48.468306 4930 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cecb9548-9b8e-4697-9afe-be08c822742e-host\") on node \"crc\" DevicePath \"\"" Nov 24 13:04:48 crc kubenswrapper[4930]: I1124 13:04:48.473803 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cecb9548-9b8e-4697-9afe-be08c822742e-kube-api-access-pp4qp" (OuterVolumeSpecName: "kube-api-access-pp4qp") pod "cecb9548-9b8e-4697-9afe-be08c822742e" (UID: "cecb9548-9b8e-4697-9afe-be08c822742e"). InnerVolumeSpecName "kube-api-access-pp4qp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:04:48 crc kubenswrapper[4930]: I1124 13:04:48.571378 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp4qp\" (UniqueName: \"kubernetes.io/projected/cecb9548-9b8e-4697-9afe-be08c822742e-kube-api-access-pp4qp\") on node \"crc\" DevicePath \"\"" Nov 24 13:04:49 crc kubenswrapper[4930]: I1124 13:04:49.265278 4930 scope.go:117] "RemoveContainer" containerID="b9366c51220275b1a9d4dfc188f5b3f8c78e3112699ec476860be71382c6f5ad" Nov 24 13:04:49 crc kubenswrapper[4930]: I1124 13:04:49.265435 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xfdtg/crc-debug-cj8gd" Nov 24 13:04:50 crc kubenswrapper[4930]: I1124 13:04:50.094276 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cecb9548-9b8e-4697-9afe-be08c822742e" path="/var/lib/kubelet/pods/cecb9548-9b8e-4697-9afe-be08c822742e/volumes" Nov 24 13:04:56 crc kubenswrapper[4930]: I1124 13:04:56.084117 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:04:56 crc kubenswrapper[4930]: E1124 13:04:56.085014 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:05:09 crc kubenswrapper[4930]: I1124 13:05:09.347155 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-84d7bcd766-9sdc2_513243cf-0c25-46b1-a535-906324dca4bb/barbican-api/0.log" Nov 24 13:05:09 crc kubenswrapper[4930]: I1124 13:05:09.362213 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-84d7bcd766-9sdc2_513243cf-0c25-46b1-a535-906324dca4bb/barbican-api-log/0.log" Nov 24 13:05:09 crc kubenswrapper[4930]: I1124 13:05:09.532299 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86bf5c4cf6-tbptj_b24c3d9b-ee6d-47ef-9391-91a395edbfbd/barbican-keystone-listener/0.log" Nov 24 13:05:09 crc kubenswrapper[4930]: I1124 13:05:09.544684 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86bf5c4cf6-tbptj_b24c3d9b-ee6d-47ef-9391-91a395edbfbd/barbican-keystone-listener-log/0.log" Nov 24 13:05:09 crc kubenswrapper[4930]: I1124 13:05:09.593663 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b86dc847c-csn2f_4a583517-6311-464a-b855-2a2d1e788461/barbican-worker/0.log" Nov 24 13:05:09 crc kubenswrapper[4930]: I1124 13:05:09.719197 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b86dc847c-csn2f_4a583517-6311-464a-b855-2a2d1e788461/barbican-worker-log/0.log" Nov 24 13:05:09 crc kubenswrapper[4930]: I1124 13:05:09.792291 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-5wn8l_c3f7af8b-b5d0-4361-ada0-42f01955a7d5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:09 crc kubenswrapper[4930]: I1124 13:05:09.937515 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5163ee34-cf81-4983-a359-1224b73676fe/ceilometer-notification-agent/0.log" Nov 24 13:05:09 crc kubenswrapper[4930]: I1124 13:05:09.944287 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5163ee34-cf81-4983-a359-1224b73676fe/ceilometer-central-agent/0.log" Nov 24 13:05:09 crc kubenswrapper[4930]: I1124 13:05:09.959383 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_5163ee34-cf81-4983-a359-1224b73676fe/proxy-httpd/0.log" Nov 24 13:05:10 crc kubenswrapper[4930]: I1124 13:05:10.036004 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5163ee34-cf81-4983-a359-1224b73676fe/sg-core/0.log" Nov 24 13:05:10 crc kubenswrapper[4930]: I1124 13:05:10.155484 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a527a579-00ed-4438-b675-70c5baefb0d9/cinder-api/0.log" Nov 24 13:05:10 crc kubenswrapper[4930]: I1124 13:05:10.213878 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a527a579-00ed-4438-b675-70c5baefb0d9/cinder-api-log/0.log" Nov 24 13:05:10 crc kubenswrapper[4930]: I1124 13:05:10.601181 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f4f3ac20-aa87-48a4-9980-08b8ca2053ef/cinder-scheduler/0.log" Nov 24 13:05:10 crc kubenswrapper[4930]: I1124 13:05:10.651818 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f4f3ac20-aa87-48a4-9980-08b8ca2053ef/probe/0.log" Nov 24 13:05:10 crc kubenswrapper[4930]: I1124 13:05:10.699273 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hdhkj_2e059ba1-d1de-4764-afd1-50b78af12ce8/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:11 crc kubenswrapper[4930]: I1124 13:05:11.084269 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:05:11 crc kubenswrapper[4930]: E1124 13:05:11.085390 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:05:11 crc kubenswrapper[4930]: I1124 13:05:11.304838 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2qvgj_7dab908a-df78-4c5a-945f-25221b75df7a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:11 crc kubenswrapper[4930]: I1124 13:05:11.309691 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64858ddbd7-fd6z9_9773394a-0a7d-40f6-a556-d3feb5acaf9d/init/0.log" Nov 24 13:05:11 crc kubenswrapper[4930]: I1124 13:05:11.502591 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64858ddbd7-fd6z9_9773394a-0a7d-40f6-a556-d3feb5acaf9d/init/0.log" Nov 24 13:05:11 crc kubenswrapper[4930]: I1124 13:05:11.558827 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64858ddbd7-fd6z9_9773394a-0a7d-40f6-a556-d3feb5acaf9d/dnsmasq-dns/0.log" Nov 24 13:05:11 crc kubenswrapper[4930]: I1124 13:05:11.635555 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-z8pjt_94e8669b-69a8-41fb-ab05-d2e913495e16/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:11 crc kubenswrapper[4930]: I1124 13:05:11.827167 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_368b80c7-cc7d-4d6a-8b4d-90ea32596bf9/glance-httpd/0.log" Nov 24 13:05:11 crc kubenswrapper[4930]: I1124 13:05:11.873324 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_368b80c7-cc7d-4d6a-8b4d-90ea32596bf9/glance-log/0.log" Nov 24 13:05:12 crc kubenswrapper[4930]: I1124 13:05:12.021993 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3/glance-log/0.log" Nov 24 13:05:12 crc kubenswrapper[4930]: I1124 13:05:12.027812 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9c85ea18-c6f1-40ab-9ca5-2a7b5ca9bdc3/glance-httpd/0.log" Nov 24 13:05:12 crc kubenswrapper[4930]: I1124 13:05:12.165436 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b7594b454-4gfnw_8851e459-770d-4a08-8b35-41e3e060608b/horizon/0.log" Nov 24 13:05:12 crc kubenswrapper[4930]: I1124 13:05:12.432972 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4zs2w_dbe1f36a-7423-4635-bc7e-7ad5ba208b8b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:12 crc kubenswrapper[4930]: I1124 13:05:12.631818 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b7594b454-4gfnw_8851e459-770d-4a08-8b35-41e3e060608b/horizon-log/0.log" Nov 24 13:05:12 crc kubenswrapper[4930]: I1124 13:05:12.665314 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-xrq57_3d1f272c-dd97-4e6b-aa3f-7e9af6a15dc9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:12 crc kubenswrapper[4930]: I1124 13:05:12.842089 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7d65b7d547-xbx74_cddd20a0-4ab1-4747-86ec-3dbd6ae06f74/keystone-api/0.log" Nov 24 13:05:12 crc kubenswrapper[4930]: I1124 13:05:12.954996 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29399821-dsqq9_94a0bff7-6443-46d2-8696-cdbbfde75f76/keystone-cron/0.log" Nov 24 13:05:13 crc kubenswrapper[4930]: I1124 13:05:13.038945 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_43eb3b2e-759d-46b8-885a-222b5d97e1c6/kube-state-metrics/0.log" Nov 24 13:05:13 crc kubenswrapper[4930]: I1124 13:05:13.211831 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-p5bj7_e0bbfe8b-57c3-4ce3-b0d0-824404ef7a0c/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:13 crc kubenswrapper[4930]: I1124 13:05:13.550303 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-785757c67f-sl8rq_c3722de2-f333-4130-97bb-d2377fc9052f/neutron-api/0.log" Nov 24 13:05:13 crc kubenswrapper[4930]: I1124 13:05:13.567044 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-785757c67f-sl8rq_c3722de2-f333-4130-97bb-d2377fc9052f/neutron-httpd/0.log" Nov 24 13:05:13 crc kubenswrapper[4930]: I1124 13:05:13.640050 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-h4vqc_2601017f-22e2-4b92-a224-ea216464d20a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:14 crc kubenswrapper[4930]: I1124 13:05:14.157296 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ae96b7cf-94c8-4f24-bc63-3b0a529f09e5/nova-api-log/0.log" Nov 24 13:05:14 crc kubenswrapper[4930]: I1124 13:05:14.296673 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3920619d-9d6b-4e91-a1ad-d0ee7fe1cb09/nova-cell0-conductor-conductor/0.log" Nov 24 13:05:14 crc kubenswrapper[4930]: I1124 13:05:14.526800 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_cd764c7d-ba7d-4a99-8988-863d9cd6ad03/nova-cell1-conductor-conductor/0.log" Nov 24 13:05:14 crc kubenswrapper[4930]: I1124 13:05:14.615731 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ae96b7cf-94c8-4f24-bc63-3b0a529f09e5/nova-api-api/0.log" 
Nov 24 13:05:14 crc kubenswrapper[4930]: I1124 13:05:14.636615 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8d796659-c1c3-48aa-94eb-e16a14f8a0c8/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 13:05:14 crc kubenswrapper[4930]: I1124 13:05:14.773355 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-x7cw9_b5e86381-1bbe-4708-a86f-da5db51c1fb7/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:14 crc kubenswrapper[4930]: I1124 13:05:14.913133 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5758b132-d70a-4597-87b7-f172d1e8560a/nova-metadata-log/0.log" Nov 24 13:05:15 crc kubenswrapper[4930]: I1124 13:05:15.300197 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_64612891-0a55-4622-8888-d141a949c665/mysql-bootstrap/0.log" Nov 24 13:05:15 crc kubenswrapper[4930]: I1124 13:05:15.371216 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7ec8562f-0cac-4105-9a8e-ba98bf34a944/nova-scheduler-scheduler/0.log" Nov 24 13:05:15 crc kubenswrapper[4930]: I1124 13:05:15.493063 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_64612891-0a55-4622-8888-d141a949c665/mysql-bootstrap/0.log" Nov 24 13:05:15 crc kubenswrapper[4930]: I1124 13:05:15.502080 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_64612891-0a55-4622-8888-d141a949c665/galera/0.log" Nov 24 13:05:15 crc kubenswrapper[4930]: I1124 13:05:15.680045 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bddca103-daee-4f61-9165-1f6ec4762bd1/mysql-bootstrap/0.log" Nov 24 13:05:15 crc kubenswrapper[4930]: I1124 13:05:15.906415 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_bddca103-daee-4f61-9165-1f6ec4762bd1/mysql-bootstrap/0.log" Nov 24 13:05:15 crc kubenswrapper[4930]: I1124 13:05:15.931029 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bddca103-daee-4f61-9165-1f6ec4762bd1/galera/0.log" Nov 24 13:05:16 crc kubenswrapper[4930]: I1124 13:05:16.106167 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1416edd0-b4e2-4acb-a449-1e9d40e9b2f5/openstackclient/0.log" Nov 24 13:05:16 crc kubenswrapper[4930]: I1124 13:05:16.147994 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fnxs8_b4686e3a-6cd1-4ada-a593-a7cfa2598257/openstack-network-exporter/0.log" Nov 24 13:05:16 crc kubenswrapper[4930]: I1124 13:05:16.390532 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5758b132-d70a-4597-87b7-f172d1e8560a/nova-metadata-metadata/0.log" Nov 24 13:05:16 crc kubenswrapper[4930]: I1124 13:05:16.411565 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q5rmd_47adcfa9-c402-4f40-b558-bb2a56d93293/ovsdb-server-init/0.log" Nov 24 13:05:16 crc kubenswrapper[4930]: I1124 13:05:16.562929 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q5rmd_47adcfa9-c402-4f40-b558-bb2a56d93293/ovsdb-server-init/0.log" Nov 24 13:05:16 crc kubenswrapper[4930]: I1124 13:05:16.570266 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q5rmd_47adcfa9-c402-4f40-b558-bb2a56d93293/ovsdb-server/0.log" Nov 24 13:05:16 crc kubenswrapper[4930]: I1124 13:05:16.618134 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q5rmd_47adcfa9-c402-4f40-b558-bb2a56d93293/ovs-vswitchd/0.log" Nov 24 13:05:16 crc kubenswrapper[4930]: I1124 13:05:16.815419 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-r7nwq_ce96cb2b-064b-4d76-a101-df9f31c86314/ovn-controller/0.log" Nov 24 13:05:16 crc kubenswrapper[4930]: I1124 13:05:16.866342 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-t562k_48d052f4-e44f-45e2-856a-08346f84f5b8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:16 crc kubenswrapper[4930]: I1124 13:05:16.993795 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1/openstack-network-exporter/0.log" Nov 24 13:05:17 crc kubenswrapper[4930]: I1124 13:05:17.076255 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6bf451fc-eaa9-4b59-a451-0c7b68e3d5b1/ovn-northd/0.log" Nov 24 13:05:17 crc kubenswrapper[4930]: I1124 13:05:17.185563 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3118a4f6-bfb6-4646-a543-2f2dcbf03681/openstack-network-exporter/0.log" Nov 24 13:05:17 crc kubenswrapper[4930]: I1124 13:05:17.258215 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3118a4f6-bfb6-4646-a543-2f2dcbf03681/ovsdbserver-nb/0.log" Nov 24 13:05:17 crc kubenswrapper[4930]: I1124 13:05:17.369957 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_abae5d96-d4bd-42db-8517-ac6defbb22f2/openstack-network-exporter/0.log" Nov 24 13:05:17 crc kubenswrapper[4930]: I1124 13:05:17.407604 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_abae5d96-d4bd-42db-8517-ac6defbb22f2/ovsdbserver-sb/0.log" Nov 24 13:05:17 crc kubenswrapper[4930]: I1124 13:05:17.680172 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-784c754f4d-ttmj6_bb758c76-2ee4-4bac-8a07-d44205706854/placement-api/0.log" Nov 24 13:05:17 crc kubenswrapper[4930]: I1124 13:05:17.771121 4930 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2247968a-aee9-4461-afd9-cfb36cc1f6fd/setup-container/0.log" Nov 24 13:05:17 crc kubenswrapper[4930]: I1124 13:05:17.785947 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-784c754f4d-ttmj6_bb758c76-2ee4-4bac-8a07-d44205706854/placement-log/0.log" Nov 24 13:05:17 crc kubenswrapper[4930]: I1124 13:05:17.952656 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2247968a-aee9-4461-afd9-cfb36cc1f6fd/setup-container/0.log" Nov 24 13:05:17 crc kubenswrapper[4930]: I1124 13:05:17.956446 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2247968a-aee9-4461-afd9-cfb36cc1f6fd/rabbitmq/0.log" Nov 24 13:05:18 crc kubenswrapper[4930]: I1124 13:05:18.019323 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a5fe79a3-de03-466f-bf55-2d8c8259895a/setup-container/0.log" Nov 24 13:05:18 crc kubenswrapper[4930]: I1124 13:05:18.339982 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a5fe79a3-de03-466f-bf55-2d8c8259895a/rabbitmq/0.log" Nov 24 13:05:18 crc kubenswrapper[4930]: I1124 13:05:18.364972 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-84f87_05ff1b01-0d59-4a45-9683-41ae2e8163bc/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:18 crc kubenswrapper[4930]: I1124 13:05:18.396925 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a5fe79a3-de03-466f-bf55-2d8c8259895a/setup-container/0.log" Nov 24 13:05:18 crc kubenswrapper[4930]: I1124 13:05:18.596949 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-gcq7j_7b4b0309-31fd-407f-a03f-df928fd4675b/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:18 crc 
kubenswrapper[4930]: I1124 13:05:18.698793 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-25f8s_29211cc5-c7d0-4aa9-9456-3313e20d2e1d/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:19 crc kubenswrapper[4930]: I1124 13:05:19.298443 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gglqx_fea938c9-2678-4985-bbe3-8f15d9a3302b/ssh-known-hosts-edpm-deployment/0.log" Nov 24 13:05:19 crc kubenswrapper[4930]: I1124 13:05:19.400159 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gzmr4_2454068c-7c38-4a67-8830-63a6b0add307/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:19 crc kubenswrapper[4930]: I1124 13:05:19.650392 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f4c64f46c-fdhkr_7544a665-a649-46c1-b2e2-4f0179645890/proxy-server/0.log" Nov 24 13:05:19 crc kubenswrapper[4930]: I1124 13:05:19.732801 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2gmcp_066844af-3950-4700-84c4-3c1043ad05e7/swift-ring-rebalance/0.log" Nov 24 13:05:19 crc kubenswrapper[4930]: I1124 13:05:19.766801 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f4c64f46c-fdhkr_7544a665-a649-46c1-b2e2-4f0179645890/proxy-httpd/0.log" Nov 24 13:05:19 crc kubenswrapper[4930]: I1124 13:05:19.873777 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/account-auditor/0.log" Nov 24 13:05:19 crc kubenswrapper[4930]: I1124 13:05:19.939085 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/account-reaper/0.log" Nov 24 13:05:20 crc kubenswrapper[4930]: I1124 13:05:20.093452 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/account-server/0.log" Nov 24 13:05:20 crc kubenswrapper[4930]: I1124 13:05:20.113869 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/account-replicator/0.log" Nov 24 13:05:20 crc kubenswrapper[4930]: I1124 13:05:20.200025 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/container-replicator/0.log" Nov 24 13:05:20 crc kubenswrapper[4930]: I1124 13:05:20.212234 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/container-auditor/0.log" Nov 24 13:05:20 crc kubenswrapper[4930]: I1124 13:05:20.287704 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/container-updater/0.log" Nov 24 13:05:20 crc kubenswrapper[4930]: I1124 13:05:20.293906 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/container-server/0.log" Nov 24 13:05:20 crc kubenswrapper[4930]: I1124 13:05:20.418991 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/object-auditor/0.log" Nov 24 13:05:20 crc kubenswrapper[4930]: I1124 13:05:20.426053 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/object-expirer/0.log" Nov 24 13:05:20 crc kubenswrapper[4930]: I1124 13:05:20.512219 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/object-server/0.log" Nov 24 13:05:20 crc kubenswrapper[4930]: I1124 13:05:20.513673 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/object-replicator/0.log" Nov 24 13:05:21 crc kubenswrapper[4930]: I1124 13:05:21.056246 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/rsync/0.log" Nov 24 13:05:21 crc kubenswrapper[4930]: I1124 13:05:21.082145 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/object-updater/0.log" Nov 24 13:05:21 crc kubenswrapper[4930]: I1124 13:05:21.087881 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cc2e28ee-ab31-4a3a-b2a8-0b8c6baf1652/swift-recon-cron/0.log" Nov 24 13:05:21 crc kubenswrapper[4930]: I1124 13:05:21.366274 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6a7fbabe-a7e2-469c-b6aa-22973dd510b3/tempest-tests-tempest-tests-runner/0.log" Nov 24 13:05:21 crc kubenswrapper[4930]: I1124 13:05:21.400995 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-cg8k6_e5f020e4-dece-42e7-b327-99797d3b447f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:21 crc kubenswrapper[4930]: I1124 13:05:21.517857 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7ecdd72c-294a-43fa-bd7a-edf2e10447fd/test-operator-logs-container/0.log" Nov 24 13:05:21 crc kubenswrapper[4930]: I1124 13:05:21.602505 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-pzrgv_4c6db01a-e2b7-4cc6-a8eb-1b2c1b62a301/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:05:23 crc kubenswrapper[4930]: I1124 13:05:23.084232 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 
13:05:23 crc kubenswrapper[4930]: E1124 13:05:23.084554 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:05:30 crc kubenswrapper[4930]: I1124 13:05:30.909429 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ca5fe78e-8ed4-4f0f-ae80-7760b1bb5afa/memcached/0.log" Nov 24 13:05:37 crc kubenswrapper[4930]: I1124 13:05:37.085149 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:05:37 crc kubenswrapper[4930]: E1124 13:05:37.086062 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:05:48 crc kubenswrapper[4930]: I1124 13:05:48.089081 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:05:48 crc kubenswrapper[4930]: E1124 13:05:48.090684 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" 
podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:05:49 crc kubenswrapper[4930]: I1124 13:05:49.702772 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/util/0.log" Nov 24 13:05:49 crc kubenswrapper[4930]: I1124 13:05:49.986259 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/pull/0.log" Nov 24 13:05:49 crc kubenswrapper[4930]: I1124 13:05:49.998919 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/util/0.log" Nov 24 13:05:50 crc kubenswrapper[4930]: I1124 13:05:50.000076 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/pull/0.log" Nov 24 13:05:50 crc kubenswrapper[4930]: I1124 13:05:50.193528 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/util/0.log" Nov 24 13:05:50 crc kubenswrapper[4930]: I1124 13:05:50.201004 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/pull/0.log" Nov 24 13:05:50 crc kubenswrapper[4930]: I1124 13:05:50.205372 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287bpzfg_7294a2f2-e7f6-489a-8520-a079269ea728/extract/0.log" Nov 24 13:05:50 crc kubenswrapper[4930]: I1124 13:05:50.858765 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7768f8c84f-wqr7x_0752fe04-d0ea-4225-8e86-62c70618a5a1/manager/0.log" Nov 24 13:05:50 crc kubenswrapper[4930]: I1124 13:05:50.875087 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7768f8c84f-wqr7x_0752fe04-d0ea-4225-8e86-62c70618a5a1/kube-rbac-proxy/0.log" Nov 24 13:05:50 crc kubenswrapper[4930]: I1124 13:05:50.905031 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6d8fd67bf7-56s9w_cf778eca-e1fc-4619-9a85-aeda0fac014b/kube-rbac-proxy/0.log" Nov 24 13:05:51 crc kubenswrapper[4930]: I1124 13:05:51.072532 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6d8fd67bf7-56s9w_cf778eca-e1fc-4619-9a85-aeda0fac014b/manager/0.log" Nov 24 13:05:51 crc kubenswrapper[4930]: I1124 13:05:51.091219 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-56dfb6b67f-wn7d4_96a9f3c5-4eaa-4265-9c3e-f0c54dd0df0b/kube-rbac-proxy/0.log" Nov 24 13:05:51 crc kubenswrapper[4930]: I1124 13:05:51.113589 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-56dfb6b67f-wn7d4_96a9f3c5-4eaa-4265-9c3e-f0c54dd0df0b/manager/0.log" Nov 24 13:05:51 crc kubenswrapper[4930]: I1124 13:05:51.265221 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8667fbf6f6-2jhpd_2115a6ba-c1ea-45f6-a340-7ccd67a77bbd/kube-rbac-proxy/0.log" Nov 24 13:05:51 crc kubenswrapper[4930]: I1124 13:05:51.384293 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8667fbf6f6-2jhpd_2115a6ba-c1ea-45f6-a340-7ccd67a77bbd/manager/0.log" Nov 24 13:05:51 crc kubenswrapper[4930]: I1124 
13:05:51.498135 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-bf4c6585d-22kp5_525584f5-a41b-4189-986d-32f6c4e6bc16/kube-rbac-proxy/0.log" Nov 24 13:05:51 crc kubenswrapper[4930]: I1124 13:05:51.521099 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-bf4c6585d-22kp5_525584f5-a41b-4189-986d-32f6c4e6bc16/manager/0.log" Nov 24 13:05:51 crc kubenswrapper[4930]: I1124 13:05:51.645243 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d86b44686-4svhq_5597a8b4-0ed5-4a54-ba70-6b2a7b9d2a8e/kube-rbac-proxy/0.log" Nov 24 13:05:51 crc kubenswrapper[4930]: I1124 13:05:51.727683 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d86b44686-4svhq_5597a8b4-0ed5-4a54-ba70-6b2a7b9d2a8e/manager/0.log" Nov 24 13:05:51 crc kubenswrapper[4930]: I1124 13:05:51.838666 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-769d9c7585-z7ftj_606a5459-832e-4986-a171-4fd89e3ee1ec/kube-rbac-proxy/0.log" Nov 24 13:05:52 crc kubenswrapper[4930]: I1124 13:05:52.050682 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-769d9c7585-z7ftj_606a5459-832e-4986-a171-4fd89e3ee1ec/manager/0.log" Nov 24 13:05:52 crc kubenswrapper[4930]: I1124 13:05:52.224184 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5c75d7c94b-ngxgx_2652d83c-0fb2-41a7-a372-2f8e48ea33cc/kube-rbac-proxy/0.log" Nov 24 13:05:52 crc kubenswrapper[4930]: I1124 13:05:52.304810 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5c75d7c94b-ngxgx_2652d83c-0fb2-41a7-a372-2f8e48ea33cc/manager/0.log" Nov 24 
13:05:52 crc kubenswrapper[4930]: I1124 13:05:52.760903 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7879fb76fd-d9wbt_37344a1b-ea4d-4dcf-a803-3811a5626106/kube-rbac-proxy/0.log" Nov 24 13:05:52 crc kubenswrapper[4930]: I1124 13:05:52.935935 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7879fb76fd-d9wbt_37344a1b-ea4d-4dcf-a803-3811a5626106/manager/0.log" Nov 24 13:05:53 crc kubenswrapper[4930]: I1124 13:05:53.009810 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7bb88cb858-4ffrf_4b01f462-8bc8-4f01-ac0c-76452c353177/manager/0.log" Nov 24 13:05:53 crc kubenswrapper[4930]: I1124 13:05:53.050874 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7bb88cb858-4ffrf_4b01f462-8bc8-4f01-ac0c-76452c353177/kube-rbac-proxy/0.log" Nov 24 13:05:53 crc kubenswrapper[4930]: I1124 13:05:53.211130 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6f8c5b86cb-kdw5m_a60dc80f-2382-4901-a79e-1468759d9281/kube-rbac-proxy/0.log" Nov 24 13:05:53 crc kubenswrapper[4930]: I1124 13:05:53.284783 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6f8c5b86cb-kdw5m_a60dc80f-2382-4901-a79e-1468759d9281/manager/0.log" Nov 24 13:05:53 crc kubenswrapper[4930]: I1124 13:05:53.501435 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-66b7d6f598-g2cfx_39e1c56a-84c3-4f33-a16d-77c62d65cd0f/kube-rbac-proxy/0.log" Nov 24 13:05:53 crc kubenswrapper[4930]: I1124 13:05:53.522235 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-66b7d6f598-g2cfx_39e1c56a-84c3-4f33-a16d-77c62d65cd0f/manager/0.log" Nov 24 13:05:53 crc kubenswrapper[4930]: I1124 13:05:53.564578 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-86d796d84d-2m7pb_8bda08b7-e7e9-41cb-b01a-9d85a18a4ce8/kube-rbac-proxy/0.log" Nov 24 13:05:53 crc kubenswrapper[4930]: I1124 13:05:53.622740 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-86d796d84d-2m7pb_8bda08b7-e7e9-41cb-b01a-9d85a18a4ce8/manager/0.log" Nov 24 13:05:53 crc kubenswrapper[4930]: I1124 13:05:53.768821 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6fdc856c5d-8kwlf_9e55dcae-85ee-412f-aa9b-3fc5a061d595/kube-rbac-proxy/0.log" Nov 24 13:05:53 crc kubenswrapper[4930]: I1124 13:05:53.803829 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6fdc856c5d-8kwlf_9e55dcae-85ee-412f-aa9b-3fc5a061d595/manager/0.log" Nov 24 13:05:53 crc kubenswrapper[4930]: I1124 13:05:53.839481 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq_f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9/kube-rbac-proxy/0.log" Nov 24 13:05:54 crc kubenswrapper[4930]: I1124 13:05:54.072357 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-79d88dcd44xc5sq_f1288d3a-ecb8-4e9f-9b0a-c7229d0940b9/manager/0.log" Nov 24 13:05:54 crc kubenswrapper[4930]: I1124 13:05:54.082963 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6cb9dc54f8-67b99_bd00a0b4-94c5-4ce5-b162-65c27e70c254/kube-rbac-proxy/0.log" Nov 24 13:05:54 crc kubenswrapper[4930]: 
I1124 13:05:54.276639 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8486c7f98b-v5s6l_a258ca7d-5a5d-477b-919c-e770ab7fa9cd/kube-rbac-proxy/0.log" Nov 24 13:05:54 crc kubenswrapper[4930]: I1124 13:05:54.493966 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5mhrz_df43ee8c-48c3-4014-a134-a3fddf9e8194/registry-server/0.log" Nov 24 13:05:54 crc kubenswrapper[4930]: I1124 13:05:54.594219 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8486c7f98b-v5s6l_a258ca7d-5a5d-477b-919c-e770ab7fa9cd/operator/0.log" Nov 24 13:05:54 crc kubenswrapper[4930]: I1124 13:05:54.608212 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5bdf4f7f7f-g62mm_25cf6a11-4150-4091-a6b8-d7510c5ca5ac/kube-rbac-proxy/0.log" Nov 24 13:05:54 crc kubenswrapper[4930]: I1124 13:05:54.774818 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5bdf4f7f7f-g62mm_25cf6a11-4150-4091-a6b8-d7510c5ca5ac/manager/0.log" Nov 24 13:05:54 crc kubenswrapper[4930]: I1124 13:05:54.908053 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-6dc664666c-qvfs7_dbb47a0b-1e01-47b7-b57f-20e2e908674e/manager/0.log" Nov 24 13:05:54 crc kubenswrapper[4930]: I1124 13:05:54.967754 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-6dc664666c-qvfs7_dbb47a0b-1e01-47b7-b57f-20e2e908674e/kube-rbac-proxy/0.log" Nov 24 13:05:55 crc kubenswrapper[4930]: I1124 13:05:55.043641 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-xt6q6_83d079ef-a30c-458e-a350-c6f6d9a8985f/operator/0.log" Nov 24 
13:05:55 crc kubenswrapper[4930]: I1124 13:05:55.237176 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-799cb6ffd6-kzwpc_6de96fac-ce97-4bec-a2af-f50f839454ea/kube-rbac-proxy/0.log" Nov 24 13:05:55 crc kubenswrapper[4930]: I1124 13:05:55.250131 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6cb9dc54f8-67b99_bd00a0b4-94c5-4ce5-b162-65c27e70c254/manager/0.log" Nov 24 13:05:55 crc kubenswrapper[4930]: I1124 13:05:55.293354 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-799cb6ffd6-kzwpc_6de96fac-ce97-4bec-a2af-f50f839454ea/manager/0.log" Nov 24 13:05:55 crc kubenswrapper[4930]: I1124 13:05:55.359280 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7798859c74-27cpb_f7031ec9-a046-4f1f-93e0-a6da41013d68/kube-rbac-proxy/0.log" Nov 24 13:05:55 crc kubenswrapper[4930]: I1124 13:05:55.490425 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7798859c74-27cpb_f7031ec9-a046-4f1f-93e0-a6da41013d68/manager/0.log" Nov 24 13:05:55 crc kubenswrapper[4930]: I1124 13:05:55.503075 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8464cf66df-f2q9m_6db937f0-a6f1-44e0-87b8-cd4e2d645e24/kube-rbac-proxy/0.log" Nov 24 13:05:55 crc kubenswrapper[4930]: I1124 13:05:55.535256 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8464cf66df-f2q9m_6db937f0-a6f1-44e0-87b8-cd4e2d645e24/manager/0.log" Nov 24 13:05:55 crc kubenswrapper[4930]: I1124 13:05:55.691310 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cd4fb6f79-2zd5j_21e42885-6ebc-4b29-a2d1-32f64e257e11/kube-rbac-proxy/0.log" Nov 24 13:05:55 crc kubenswrapper[4930]: I1124 13:05:55.720198 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cd4fb6f79-2zd5j_21e42885-6ebc-4b29-a2d1-32f64e257e11/manager/0.log" Nov 24 13:06:03 crc kubenswrapper[4930]: I1124 13:06:03.084775 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:06:03 crc kubenswrapper[4930]: E1124 13:06:03.085809 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:06:12 crc kubenswrapper[4930]: I1124 13:06:12.385263 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xjpj4_080a5d44-2fa6-4e44-bd77-59047f85aea9/control-plane-machine-set-operator/0.log" Nov 24 13:06:12 crc kubenswrapper[4930]: I1124 13:06:12.501707 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kw8wv_28bc15a8-f8ed-4595-8a4f-e0d9e895c085/kube-rbac-proxy/0.log" Nov 24 13:06:12 crc kubenswrapper[4930]: I1124 13:06:12.555556 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kw8wv_28bc15a8-f8ed-4595-8a4f-e0d9e895c085/machine-api-operator/0.log" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.085263 4930 scope.go:117] "RemoveContainer" 
containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:06:18 crc kubenswrapper[4930]: E1124 13:06:18.086086 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.201029 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2nv5f"] Nov 24 13:06:18 crc kubenswrapper[4930]: E1124 13:06:18.201677 4930 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cecb9548-9b8e-4697-9afe-be08c822742e" containerName="container-00" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.201700 4930 state_mem.go:107] "Deleted CPUSet assignment" podUID="cecb9548-9b8e-4697-9afe-be08c822742e" containerName="container-00" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.201957 4930 memory_manager.go:354] "RemoveStaleState removing state" podUID="cecb9548-9b8e-4697-9afe-be08c822742e" containerName="container-00" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.203849 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.234165 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nv5f"] Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.258055 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b06103-8348-4192-841a-5cd60f4a52d6-utilities\") pod \"redhat-marketplace-2nv5f\" (UID: \"a6b06103-8348-4192-841a-5cd60f4a52d6\") " pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.258192 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b06103-8348-4192-841a-5cd60f4a52d6-catalog-content\") pod \"redhat-marketplace-2nv5f\" (UID: \"a6b06103-8348-4192-841a-5cd60f4a52d6\") " pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.258238 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5kt7\" (UniqueName: \"kubernetes.io/projected/a6b06103-8348-4192-841a-5cd60f4a52d6-kube-api-access-j5kt7\") pod \"redhat-marketplace-2nv5f\" (UID: \"a6b06103-8348-4192-841a-5cd60f4a52d6\") " pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.359440 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b06103-8348-4192-841a-5cd60f4a52d6-catalog-content\") pod \"redhat-marketplace-2nv5f\" (UID: \"a6b06103-8348-4192-841a-5cd60f4a52d6\") " pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.359499 4930 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j5kt7\" (UniqueName: \"kubernetes.io/projected/a6b06103-8348-4192-841a-5cd60f4a52d6-kube-api-access-j5kt7\") pod \"redhat-marketplace-2nv5f\" (UID: \"a6b06103-8348-4192-841a-5cd60f4a52d6\") " pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.359610 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b06103-8348-4192-841a-5cd60f4a52d6-utilities\") pod \"redhat-marketplace-2nv5f\" (UID: \"a6b06103-8348-4192-841a-5cd60f4a52d6\") " pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.360043 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b06103-8348-4192-841a-5cd60f4a52d6-utilities\") pod \"redhat-marketplace-2nv5f\" (UID: \"a6b06103-8348-4192-841a-5cd60f4a52d6\") " pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.360258 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b06103-8348-4192-841a-5cd60f4a52d6-catalog-content\") pod \"redhat-marketplace-2nv5f\" (UID: \"a6b06103-8348-4192-841a-5cd60f4a52d6\") " pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.381576 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5kt7\" (UniqueName: \"kubernetes.io/projected/a6b06103-8348-4192-841a-5cd60f4a52d6-kube-api-access-j5kt7\") pod \"redhat-marketplace-2nv5f\" (UID: \"a6b06103-8348-4192-841a-5cd60f4a52d6\") " pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:18 crc kubenswrapper[4930]: I1124 13:06:18.542198 4930 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:19 crc kubenswrapper[4930]: I1124 13:06:19.006368 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nv5f"] Nov 24 13:06:19 crc kubenswrapper[4930]: W1124 13:06:19.013648 4930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6b06103_8348_4192_841a_5cd60f4a52d6.slice/crio-8c22caa62ffa7489e7a535719094957518c7cbaec3194e15e79a6293fa672559 WatchSource:0}: Error finding container 8c22caa62ffa7489e7a535719094957518c7cbaec3194e15e79a6293fa672559: Status 404 returned error can't find the container with id 8c22caa62ffa7489e7a535719094957518c7cbaec3194e15e79a6293fa672559 Nov 24 13:06:19 crc kubenswrapper[4930]: I1124 13:06:19.079318 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nv5f" event={"ID":"a6b06103-8348-4192-841a-5cd60f4a52d6","Type":"ContainerStarted","Data":"8c22caa62ffa7489e7a535719094957518c7cbaec3194e15e79a6293fa672559"} Nov 24 13:06:20 crc kubenswrapper[4930]: I1124 13:06:20.088941 4930 generic.go:334] "Generic (PLEG): container finished" podID="a6b06103-8348-4192-841a-5cd60f4a52d6" containerID="f1d660da838ad4bcd17fdf610b7f95598ceae10646a360e4875794a4517df535" exitCode=0 Nov 24 13:06:20 crc kubenswrapper[4930]: I1124 13:06:20.094781 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nv5f" event={"ID":"a6b06103-8348-4192-841a-5cd60f4a52d6","Type":"ContainerDied","Data":"f1d660da838ad4bcd17fdf610b7f95598ceae10646a360e4875794a4517df535"} Nov 24 13:06:22 crc kubenswrapper[4930]: I1124 13:06:22.115743 4930 generic.go:334] "Generic (PLEG): container finished" podID="a6b06103-8348-4192-841a-5cd60f4a52d6" containerID="4ff7503f91c86a33a9dfa7ef6efbae73758720484d8a4d27c1919bdb65b2b8c0" exitCode=0 Nov 24 13:06:22 crc kubenswrapper[4930]: I1124 
13:06:22.115821 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nv5f" event={"ID":"a6b06103-8348-4192-841a-5cd60f4a52d6","Type":"ContainerDied","Data":"4ff7503f91c86a33a9dfa7ef6efbae73758720484d8a4d27c1919bdb65b2b8c0"} Nov 24 13:06:23 crc kubenswrapper[4930]: I1124 13:06:23.127044 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nv5f" event={"ID":"a6b06103-8348-4192-841a-5cd60f4a52d6","Type":"ContainerStarted","Data":"ee72753003efdd79338727041c29317cfb98b7a8eb0d2e8b8244945393994ee4"} Nov 24 13:06:23 crc kubenswrapper[4930]: I1124 13:06:23.154512 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2nv5f" podStartSLOduration=2.743083098 podStartE2EDuration="5.154492429s" podCreationTimestamp="2025-11-24 13:06:18 +0000 UTC" firstStartedPulling="2025-11-24 13:06:20.090373036 +0000 UTC m=+4026.704700986" lastFinishedPulling="2025-11-24 13:06:22.501782367 +0000 UTC m=+4029.116110317" observedRunningTime="2025-11-24 13:06:23.147787387 +0000 UTC m=+4029.762115387" watchObservedRunningTime="2025-11-24 13:06:23.154492429 +0000 UTC m=+4029.768820379" Nov 24 13:06:24 crc kubenswrapper[4930]: I1124 13:06:24.568056 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-7rggt_475d077a-f4ed-4d11-9cc9-ec7b5dc365fe/cert-manager-controller/0.log" Nov 24 13:06:24 crc kubenswrapper[4930]: I1124 13:06:24.728061 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-6cpnr_cbbf065d-9533-4da3-80b7-0f20e160caf4/cert-manager-cainjector/0.log" Nov 24 13:06:24 crc kubenswrapper[4930]: I1124 13:06:24.762686 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-54lcm_baaa4d3f-5068-4824-a874-eb5e484bcf5b/cert-manager-webhook/0.log" Nov 24 13:06:28 crc 
kubenswrapper[4930]: I1124 13:06:28.542785 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:28 crc kubenswrapper[4930]: I1124 13:06:28.543364 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:28 crc kubenswrapper[4930]: I1124 13:06:28.587362 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:29 crc kubenswrapper[4930]: I1124 13:06:29.232245 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:29 crc kubenswrapper[4930]: I1124 13:06:29.280485 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nv5f"] Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.085223 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:06:31 crc kubenswrapper[4930]: E1124 13:06:31.085774 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.196358 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2nv5f" podUID="a6b06103-8348-4192-841a-5cd60f4a52d6" containerName="registry-server" containerID="cri-o://ee72753003efdd79338727041c29317cfb98b7a8eb0d2e8b8244945393994ee4" gracePeriod=2 Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 
13:06:31.227853 4930 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ttnvv"] Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.230201 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.245907 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttnvv"] Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.329665 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51da6a9c-0bb6-483a-9558-164acd5be969-utilities\") pod \"community-operators-ttnvv\" (UID: \"51da6a9c-0bb6-483a-9558-164acd5be969\") " pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.330067 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51da6a9c-0bb6-483a-9558-164acd5be969-catalog-content\") pod \"community-operators-ttnvv\" (UID: \"51da6a9c-0bb6-483a-9558-164acd5be969\") " pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.330097 4930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf6vm\" (UniqueName: \"kubernetes.io/projected/51da6a9c-0bb6-483a-9558-164acd5be969-kube-api-access-zf6vm\") pod \"community-operators-ttnvv\" (UID: \"51da6a9c-0bb6-483a-9558-164acd5be969\") " pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.434054 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51da6a9c-0bb6-483a-9558-164acd5be969-catalog-content\") pod 
\"community-operators-ttnvv\" (UID: \"51da6a9c-0bb6-483a-9558-164acd5be969\") " pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.434118 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf6vm\" (UniqueName: \"kubernetes.io/projected/51da6a9c-0bb6-483a-9558-164acd5be969-kube-api-access-zf6vm\") pod \"community-operators-ttnvv\" (UID: \"51da6a9c-0bb6-483a-9558-164acd5be969\") " pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.434179 4930 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51da6a9c-0bb6-483a-9558-164acd5be969-utilities\") pod \"community-operators-ttnvv\" (UID: \"51da6a9c-0bb6-483a-9558-164acd5be969\") " pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.434515 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51da6a9c-0bb6-483a-9558-164acd5be969-catalog-content\") pod \"community-operators-ttnvv\" (UID: \"51da6a9c-0bb6-483a-9558-164acd5be969\") " pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.434650 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51da6a9c-0bb6-483a-9558-164acd5be969-utilities\") pod \"community-operators-ttnvv\" (UID: \"51da6a9c-0bb6-483a-9558-164acd5be969\") " pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.453406 4930 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf6vm\" (UniqueName: \"kubernetes.io/projected/51da6a9c-0bb6-483a-9558-164acd5be969-kube-api-access-zf6vm\") pod \"community-operators-ttnvv\" (UID: 
\"51da6a9c-0bb6-483a-9558-164acd5be969\") " pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.596052 4930 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.801295 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.942188 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5kt7\" (UniqueName: \"kubernetes.io/projected/a6b06103-8348-4192-841a-5cd60f4a52d6-kube-api-access-j5kt7\") pod \"a6b06103-8348-4192-841a-5cd60f4a52d6\" (UID: \"a6b06103-8348-4192-841a-5cd60f4a52d6\") " Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.942271 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b06103-8348-4192-841a-5cd60f4a52d6-utilities\") pod \"a6b06103-8348-4192-841a-5cd60f4a52d6\" (UID: \"a6b06103-8348-4192-841a-5cd60f4a52d6\") " Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.942334 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b06103-8348-4192-841a-5cd60f4a52d6-catalog-content\") pod \"a6b06103-8348-4192-841a-5cd60f4a52d6\" (UID: \"a6b06103-8348-4192-841a-5cd60f4a52d6\") " Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.947520 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6b06103-8348-4192-841a-5cd60f4a52d6-utilities" (OuterVolumeSpecName: "utilities") pod "a6b06103-8348-4192-841a-5cd60f4a52d6" (UID: "a6b06103-8348-4192-841a-5cd60f4a52d6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.961675 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6b06103-8348-4192-841a-5cd60f4a52d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6b06103-8348-4192-841a-5cd60f4a52d6" (UID: "a6b06103-8348-4192-841a-5cd60f4a52d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:06:31 crc kubenswrapper[4930]: I1124 13:06:31.963422 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b06103-8348-4192-841a-5cd60f4a52d6-kube-api-access-j5kt7" (OuterVolumeSpecName: "kube-api-access-j5kt7") pod "a6b06103-8348-4192-841a-5cd60f4a52d6" (UID: "a6b06103-8348-4192-841a-5cd60f4a52d6"). InnerVolumeSpecName "kube-api-access-j5kt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.044169 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5kt7\" (UniqueName: \"kubernetes.io/projected/a6b06103-8348-4192-841a-5cd60f4a52d6-kube-api-access-j5kt7\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.044210 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b06103-8348-4192-841a-5cd60f4a52d6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.044223 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b06103-8348-4192-841a-5cd60f4a52d6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.172608 4930 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttnvv"] Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 
13:06:32.207801 4930 generic.go:334] "Generic (PLEG): container finished" podID="a6b06103-8348-4192-841a-5cd60f4a52d6" containerID="ee72753003efdd79338727041c29317cfb98b7a8eb0d2e8b8244945393994ee4" exitCode=0 Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.207924 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2nv5f" Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.207925 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nv5f" event={"ID":"a6b06103-8348-4192-841a-5cd60f4a52d6","Type":"ContainerDied","Data":"ee72753003efdd79338727041c29317cfb98b7a8eb0d2e8b8244945393994ee4"} Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.208284 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nv5f" event={"ID":"a6b06103-8348-4192-841a-5cd60f4a52d6","Type":"ContainerDied","Data":"8c22caa62ffa7489e7a535719094957518c7cbaec3194e15e79a6293fa672559"} Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.208313 4930 scope.go:117] "RemoveContainer" containerID="ee72753003efdd79338727041c29317cfb98b7a8eb0d2e8b8244945393994ee4" Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.211027 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttnvv" event={"ID":"51da6a9c-0bb6-483a-9558-164acd5be969","Type":"ContainerStarted","Data":"73680aee3deba093ced88973573835f84e562cf088c2d1d370f8dd396d279554"} Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.234478 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nv5f"] Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.242949 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nv5f"] Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.243326 4930 scope.go:117] "RemoveContainer" 
containerID="4ff7503f91c86a33a9dfa7ef6efbae73758720484d8a4d27c1919bdb65b2b8c0" Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.279820 4930 scope.go:117] "RemoveContainer" containerID="f1d660da838ad4bcd17fdf610b7f95598ceae10646a360e4875794a4517df535" Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.320838 4930 scope.go:117] "RemoveContainer" containerID="ee72753003efdd79338727041c29317cfb98b7a8eb0d2e8b8244945393994ee4" Nov 24 13:06:32 crc kubenswrapper[4930]: E1124 13:06:32.321236 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee72753003efdd79338727041c29317cfb98b7a8eb0d2e8b8244945393994ee4\": container with ID starting with ee72753003efdd79338727041c29317cfb98b7a8eb0d2e8b8244945393994ee4 not found: ID does not exist" containerID="ee72753003efdd79338727041c29317cfb98b7a8eb0d2e8b8244945393994ee4" Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.321271 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee72753003efdd79338727041c29317cfb98b7a8eb0d2e8b8244945393994ee4"} err="failed to get container status \"ee72753003efdd79338727041c29317cfb98b7a8eb0d2e8b8244945393994ee4\": rpc error: code = NotFound desc = could not find container \"ee72753003efdd79338727041c29317cfb98b7a8eb0d2e8b8244945393994ee4\": container with ID starting with ee72753003efdd79338727041c29317cfb98b7a8eb0d2e8b8244945393994ee4 not found: ID does not exist" Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.321310 4930 scope.go:117] "RemoveContainer" containerID="4ff7503f91c86a33a9dfa7ef6efbae73758720484d8a4d27c1919bdb65b2b8c0" Nov 24 13:06:32 crc kubenswrapper[4930]: E1124 13:06:32.321545 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ff7503f91c86a33a9dfa7ef6efbae73758720484d8a4d27c1919bdb65b2b8c0\": container with ID starting with 
4ff7503f91c86a33a9dfa7ef6efbae73758720484d8a4d27c1919bdb65b2b8c0 not found: ID does not exist" containerID="4ff7503f91c86a33a9dfa7ef6efbae73758720484d8a4d27c1919bdb65b2b8c0" Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.321575 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ff7503f91c86a33a9dfa7ef6efbae73758720484d8a4d27c1919bdb65b2b8c0"} err="failed to get container status \"4ff7503f91c86a33a9dfa7ef6efbae73758720484d8a4d27c1919bdb65b2b8c0\": rpc error: code = NotFound desc = could not find container \"4ff7503f91c86a33a9dfa7ef6efbae73758720484d8a4d27c1919bdb65b2b8c0\": container with ID starting with 4ff7503f91c86a33a9dfa7ef6efbae73758720484d8a4d27c1919bdb65b2b8c0 not found: ID does not exist" Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.321589 4930 scope.go:117] "RemoveContainer" containerID="f1d660da838ad4bcd17fdf610b7f95598ceae10646a360e4875794a4517df535" Nov 24 13:06:32 crc kubenswrapper[4930]: E1124 13:06:32.321838 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d660da838ad4bcd17fdf610b7f95598ceae10646a360e4875794a4517df535\": container with ID starting with f1d660da838ad4bcd17fdf610b7f95598ceae10646a360e4875794a4517df535 not found: ID does not exist" containerID="f1d660da838ad4bcd17fdf610b7f95598ceae10646a360e4875794a4517df535" Nov 24 13:06:32 crc kubenswrapper[4930]: I1124 13:06:32.321864 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d660da838ad4bcd17fdf610b7f95598ceae10646a360e4875794a4517df535"} err="failed to get container status \"f1d660da838ad4bcd17fdf610b7f95598ceae10646a360e4875794a4517df535\": rpc error: code = NotFound desc = could not find container \"f1d660da838ad4bcd17fdf610b7f95598ceae10646a360e4875794a4517df535\": container with ID starting with f1d660da838ad4bcd17fdf610b7f95598ceae10646a360e4875794a4517df535 not found: ID does not 
exist" Nov 24 13:06:33 crc kubenswrapper[4930]: I1124 13:06:33.222185 4930 generic.go:334] "Generic (PLEG): container finished" podID="51da6a9c-0bb6-483a-9558-164acd5be969" containerID="c3114b8bcfa61eb71fb757360be5111f221be8f3125118cda56ebf6f6c0bb67f" exitCode=0 Nov 24 13:06:33 crc kubenswrapper[4930]: I1124 13:06:33.222372 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttnvv" event={"ID":"51da6a9c-0bb6-483a-9558-164acd5be969","Type":"ContainerDied","Data":"c3114b8bcfa61eb71fb757360be5111f221be8f3125118cda56ebf6f6c0bb67f"} Nov 24 13:06:34 crc kubenswrapper[4930]: I1124 13:06:34.101404 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b06103-8348-4192-841a-5cd60f4a52d6" path="/var/lib/kubelet/pods/a6b06103-8348-4192-841a-5cd60f4a52d6/volumes" Nov 24 13:06:35 crc kubenswrapper[4930]: I1124 13:06:35.240833 4930 generic.go:334] "Generic (PLEG): container finished" podID="51da6a9c-0bb6-483a-9558-164acd5be969" containerID="702b3d41b9339d386a64b7bb27b36113552a73b253c2c96ebb2b053b9cab64e7" exitCode=0 Nov 24 13:06:35 crc kubenswrapper[4930]: I1124 13:06:35.240918 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttnvv" event={"ID":"51da6a9c-0bb6-483a-9558-164acd5be969","Type":"ContainerDied","Data":"702b3d41b9339d386a64b7bb27b36113552a73b253c2c96ebb2b053b9cab64e7"} Nov 24 13:06:36 crc kubenswrapper[4930]: I1124 13:06:36.262601 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttnvv" event={"ID":"51da6a9c-0bb6-483a-9558-164acd5be969","Type":"ContainerStarted","Data":"424c9e930b73332489e0ce7d7bdb6bdaade78f70b54671a8428d2a640feb6a90"} Nov 24 13:06:36 crc kubenswrapper[4930]: I1124 13:06:36.290631 4930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ttnvv" podStartSLOduration=2.8890801 podStartE2EDuration="5.290612829s" 
podCreationTimestamp="2025-11-24 13:06:31 +0000 UTC" firstStartedPulling="2025-11-24 13:06:33.224366275 +0000 UTC m=+4039.838694235" lastFinishedPulling="2025-11-24 13:06:35.625899014 +0000 UTC m=+4042.240226964" observedRunningTime="2025-11-24 13:06:36.281981632 +0000 UTC m=+4042.896309592" watchObservedRunningTime="2025-11-24 13:06:36.290612829 +0000 UTC m=+4042.904940779" Nov 24 13:06:36 crc kubenswrapper[4930]: I1124 13:06:36.573198 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-2mgrm_76de68fb-d44e-4e24-8843-18718d6763df/nmstate-console-plugin/0.log" Nov 24 13:06:36 crc kubenswrapper[4930]: I1124 13:06:36.773512 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-zgbjb_06f7eb12-09fc-4d53-9ff2-bc97ea26e4c8/nmstate-handler/0.log" Nov 24 13:06:36 crc kubenswrapper[4930]: I1124 13:06:36.829788 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-4tlrm_aa0b5808-c9b9-42b0-b585-1677b72ed1f3/nmstate-metrics/0.log" Nov 24 13:06:36 crc kubenswrapper[4930]: I1124 13:06:36.834773 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-4tlrm_aa0b5808-c9b9-42b0-b585-1677b72ed1f3/kube-rbac-proxy/0.log" Nov 24 13:06:37 crc kubenswrapper[4930]: I1124 13:06:37.062367 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-569q4_fd169461-4da3-47da-b2b5-d7c796f9eec9/nmstate-webhook/0.log" Nov 24 13:06:37 crc kubenswrapper[4930]: I1124 13:06:37.071109 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-vkg6v_326bae6a-98bd-4c7a-adfe-68f5680ac766/nmstate-operator/0.log" Nov 24 13:06:41 crc kubenswrapper[4930]: I1124 13:06:41.597215 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:41 crc kubenswrapper[4930]: I1124 13:06:41.597822 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:41 crc kubenswrapper[4930]: I1124 13:06:41.644764 4930 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:42 crc kubenswrapper[4930]: I1124 13:06:42.529901 4930 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:42 crc kubenswrapper[4930]: I1124 13:06:42.576645 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ttnvv"] Nov 24 13:06:44 crc kubenswrapper[4930]: I1124 13:06:44.330928 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ttnvv" podUID="51da6a9c-0bb6-483a-9558-164acd5be969" containerName="registry-server" containerID="cri-o://424c9e930b73332489e0ce7d7bdb6bdaade78f70b54671a8428d2a640feb6a90" gracePeriod=2 Nov 24 13:06:44 crc kubenswrapper[4930]: I1124 13:06:44.774824 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:44 crc kubenswrapper[4930]: I1124 13:06:44.901863 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51da6a9c-0bb6-483a-9558-164acd5be969-utilities\") pod \"51da6a9c-0bb6-483a-9558-164acd5be969\" (UID: \"51da6a9c-0bb6-483a-9558-164acd5be969\") " Nov 24 13:06:44 crc kubenswrapper[4930]: I1124 13:06:44.902223 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf6vm\" (UniqueName: \"kubernetes.io/projected/51da6a9c-0bb6-483a-9558-164acd5be969-kube-api-access-zf6vm\") pod \"51da6a9c-0bb6-483a-9558-164acd5be969\" (UID: \"51da6a9c-0bb6-483a-9558-164acd5be969\") " Nov 24 13:06:44 crc kubenswrapper[4930]: I1124 13:06:44.902268 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51da6a9c-0bb6-483a-9558-164acd5be969-catalog-content\") pod \"51da6a9c-0bb6-483a-9558-164acd5be969\" (UID: \"51da6a9c-0bb6-483a-9558-164acd5be969\") " Nov 24 13:06:44 crc kubenswrapper[4930]: I1124 13:06:44.902595 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51da6a9c-0bb6-483a-9558-164acd5be969-utilities" (OuterVolumeSpecName: "utilities") pod "51da6a9c-0bb6-483a-9558-164acd5be969" (UID: "51da6a9c-0bb6-483a-9558-164acd5be969"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:06:44 crc kubenswrapper[4930]: I1124 13:06:44.903094 4930 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51da6a9c-0bb6-483a-9558-164acd5be969-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:44 crc kubenswrapper[4930]: I1124 13:06:44.915639 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51da6a9c-0bb6-483a-9558-164acd5be969-kube-api-access-zf6vm" (OuterVolumeSpecName: "kube-api-access-zf6vm") pod "51da6a9c-0bb6-483a-9558-164acd5be969" (UID: "51da6a9c-0bb6-483a-9558-164acd5be969"). InnerVolumeSpecName "kube-api-access-zf6vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:06:44 crc kubenswrapper[4930]: I1124 13:06:44.965232 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51da6a9c-0bb6-483a-9558-164acd5be969-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51da6a9c-0bb6-483a-9558-164acd5be969" (UID: "51da6a9c-0bb6-483a-9558-164acd5be969"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.005708 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf6vm\" (UniqueName: \"kubernetes.io/projected/51da6a9c-0bb6-483a-9558-164acd5be969-kube-api-access-zf6vm\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.005745 4930 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51da6a9c-0bb6-483a-9558-164acd5be969-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.342146 4930 generic.go:334] "Generic (PLEG): container finished" podID="51da6a9c-0bb6-483a-9558-164acd5be969" containerID="424c9e930b73332489e0ce7d7bdb6bdaade78f70b54671a8428d2a640feb6a90" exitCode=0 Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.342193 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttnvv" event={"ID":"51da6a9c-0bb6-483a-9558-164acd5be969","Type":"ContainerDied","Data":"424c9e930b73332489e0ce7d7bdb6bdaade78f70b54671a8428d2a640feb6a90"} Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.342226 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttnvv" event={"ID":"51da6a9c-0bb6-483a-9558-164acd5be969","Type":"ContainerDied","Data":"73680aee3deba093ced88973573835f84e562cf088c2d1d370f8dd396d279554"} Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.342249 4930 scope.go:117] "RemoveContainer" containerID="424c9e930b73332489e0ce7d7bdb6bdaade78f70b54671a8428d2a640feb6a90" Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.342403 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ttnvv" Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.371849 4930 scope.go:117] "RemoveContainer" containerID="702b3d41b9339d386a64b7bb27b36113552a73b253c2c96ebb2b053b9cab64e7" Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.392725 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ttnvv"] Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.395408 4930 scope.go:117] "RemoveContainer" containerID="c3114b8bcfa61eb71fb757360be5111f221be8f3125118cda56ebf6f6c0bb67f" Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.402491 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ttnvv"] Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.442316 4930 scope.go:117] "RemoveContainer" containerID="424c9e930b73332489e0ce7d7bdb6bdaade78f70b54671a8428d2a640feb6a90" Nov 24 13:06:45 crc kubenswrapper[4930]: E1124 13:06:45.442899 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"424c9e930b73332489e0ce7d7bdb6bdaade78f70b54671a8428d2a640feb6a90\": container with ID starting with 424c9e930b73332489e0ce7d7bdb6bdaade78f70b54671a8428d2a640feb6a90 not found: ID does not exist" containerID="424c9e930b73332489e0ce7d7bdb6bdaade78f70b54671a8428d2a640feb6a90" Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.442972 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424c9e930b73332489e0ce7d7bdb6bdaade78f70b54671a8428d2a640feb6a90"} err="failed to get container status \"424c9e930b73332489e0ce7d7bdb6bdaade78f70b54671a8428d2a640feb6a90\": rpc error: code = NotFound desc = could not find container \"424c9e930b73332489e0ce7d7bdb6bdaade78f70b54671a8428d2a640feb6a90\": container with ID starting with 424c9e930b73332489e0ce7d7bdb6bdaade78f70b54671a8428d2a640feb6a90 not 
found: ID does not exist" Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.443009 4930 scope.go:117] "RemoveContainer" containerID="702b3d41b9339d386a64b7bb27b36113552a73b253c2c96ebb2b053b9cab64e7" Nov 24 13:06:45 crc kubenswrapper[4930]: E1124 13:06:45.443431 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"702b3d41b9339d386a64b7bb27b36113552a73b253c2c96ebb2b053b9cab64e7\": container with ID starting with 702b3d41b9339d386a64b7bb27b36113552a73b253c2c96ebb2b053b9cab64e7 not found: ID does not exist" containerID="702b3d41b9339d386a64b7bb27b36113552a73b253c2c96ebb2b053b9cab64e7" Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.443477 4930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702b3d41b9339d386a64b7bb27b36113552a73b253c2c96ebb2b053b9cab64e7"} err="failed to get container status \"702b3d41b9339d386a64b7bb27b36113552a73b253c2c96ebb2b053b9cab64e7\": rpc error: code = NotFound desc = could not find container \"702b3d41b9339d386a64b7bb27b36113552a73b253c2c96ebb2b053b9cab64e7\": container with ID starting with 702b3d41b9339d386a64b7bb27b36113552a73b253c2c96ebb2b053b9cab64e7 not found: ID does not exist" Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.443505 4930 scope.go:117] "RemoveContainer" containerID="c3114b8bcfa61eb71fb757360be5111f221be8f3125118cda56ebf6f6c0bb67f" Nov 24 13:06:45 crc kubenswrapper[4930]: E1124 13:06:45.443795 4930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3114b8bcfa61eb71fb757360be5111f221be8f3125118cda56ebf6f6c0bb67f\": container with ID starting with c3114b8bcfa61eb71fb757360be5111f221be8f3125118cda56ebf6f6c0bb67f not found: ID does not exist" containerID="c3114b8bcfa61eb71fb757360be5111f221be8f3125118cda56ebf6f6c0bb67f" Nov 24 13:06:45 crc kubenswrapper[4930]: I1124 13:06:45.443828 4930 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3114b8bcfa61eb71fb757360be5111f221be8f3125118cda56ebf6f6c0bb67f"} err="failed to get container status \"c3114b8bcfa61eb71fb757360be5111f221be8f3125118cda56ebf6f6c0bb67f\": rpc error: code = NotFound desc = could not find container \"c3114b8bcfa61eb71fb757360be5111f221be8f3125118cda56ebf6f6c0bb67f\": container with ID starting with c3114b8bcfa61eb71fb757360be5111f221be8f3125118cda56ebf6f6c0bb67f not found: ID does not exist" Nov 24 13:06:46 crc kubenswrapper[4930]: I1124 13:06:46.084555 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:06:46 crc kubenswrapper[4930]: E1124 13:06:46.085130 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:06:46 crc kubenswrapper[4930]: I1124 13:06:46.095092 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51da6a9c-0bb6-483a-9558-164acd5be969" path="/var/lib/kubelet/pods/51da6a9c-0bb6-483a-9558-164acd5be969/volumes" Nov 24 13:06:50 crc kubenswrapper[4930]: I1124 13:06:50.792257 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-twjmq_86addadb-2b19-4ba8-b365-0d5d5dd326c5/kube-rbac-proxy/0.log" Nov 24 13:06:50 crc kubenswrapper[4930]: I1124 13:06:50.807149 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-twjmq_86addadb-2b19-4ba8-b365-0d5d5dd326c5/controller/0.log" Nov 24 13:06:51 crc kubenswrapper[4930]: I1124 13:06:51.533471 4930 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-tdgdd_086e1816-851c-4997-b8f2-04563ff50e05/frr-k8s-webhook-server/0.log" Nov 24 13:06:51 crc kubenswrapper[4930]: I1124 13:06:51.568938 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-frr-files/0.log" Nov 24 13:06:51 crc kubenswrapper[4930]: I1124 13:06:51.724260 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-frr-files/0.log" Nov 24 13:06:51 crc kubenswrapper[4930]: I1124 13:06:51.754186 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-metrics/0.log" Nov 24 13:06:51 crc kubenswrapper[4930]: I1124 13:06:51.775591 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-reloader/0.log" Nov 24 13:06:51 crc kubenswrapper[4930]: I1124 13:06:51.802202 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-reloader/0.log" Nov 24 13:06:51 crc kubenswrapper[4930]: I1124 13:06:51.987563 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-metrics/0.log" Nov 24 13:06:51 crc kubenswrapper[4930]: I1124 13:06:51.995918 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-reloader/0.log" Nov 24 13:06:52 crc kubenswrapper[4930]: I1124 13:06:52.012936 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-metrics/0.log" Nov 24 13:06:52 crc kubenswrapper[4930]: I1124 13:06:52.016978 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-frr-files/0.log" Nov 24 13:06:52 crc kubenswrapper[4930]: I1124 13:06:52.187033 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-metrics/0.log" Nov 24 13:06:52 crc kubenswrapper[4930]: I1124 13:06:52.192269 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-reloader/0.log" Nov 24 13:06:52 crc kubenswrapper[4930]: I1124 13:06:52.193550 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/cp-frr-files/0.log" Nov 24 13:06:52 crc kubenswrapper[4930]: I1124 13:06:52.235745 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/controller/0.log" Nov 24 13:06:52 crc kubenswrapper[4930]: I1124 13:06:52.373406 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/frr-metrics/0.log" Nov 24 13:06:52 crc kubenswrapper[4930]: I1124 13:06:52.418671 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/kube-rbac-proxy/0.log" Nov 24 13:06:52 crc kubenswrapper[4930]: I1124 13:06:52.457721 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/kube-rbac-proxy-frr/0.log" Nov 24 13:06:52 crc kubenswrapper[4930]: I1124 13:06:52.972248 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/reloader/0.log" Nov 24 13:06:53 crc kubenswrapper[4930]: I1124 13:06:53.025678 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6d8988b99d-fjfg4_37f079f2-d796-4fce-8fdb-030a0a663e1b/manager/0.log" Nov 24 13:06:53 crc kubenswrapper[4930]: I1124 13:06:53.254945 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-677786b954-pxf8r_5339f9f0-99ee-4ff8-90cc-8ab86611abc6/webhook-server/0.log" Nov 24 13:06:53 crc kubenswrapper[4930]: I1124 13:06:53.434336 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-t7cvk_cdda2566-3ca8-492b-a37f-18a8beccb6a6/kube-rbac-proxy/0.log" Nov 24 13:06:53 crc kubenswrapper[4930]: I1124 13:06:53.617884 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpbvr_2b7c02aa-da2a-43db-9985-96ae84d5e3df/frr/0.log" Nov 24 13:06:53 crc kubenswrapper[4930]: I1124 13:06:53.856792 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-t7cvk_cdda2566-3ca8-492b-a37f-18a8beccb6a6/speaker/0.log" Nov 24 13:07:00 crc kubenswrapper[4930]: I1124 13:07:00.084778 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:07:00 crc kubenswrapper[4930]: E1124 13:07:00.085422 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:07:05 crc kubenswrapper[4930]: I1124 13:07:05.227201 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/util/0.log" Nov 24 13:07:05 crc kubenswrapper[4930]: I1124 
13:07:05.456356 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/util/0.log" Nov 24 13:07:05 crc kubenswrapper[4930]: I1124 13:07:05.456881 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/pull/0.log" Nov 24 13:07:05 crc kubenswrapper[4930]: I1124 13:07:05.461439 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/pull/0.log" Nov 24 13:07:05 crc kubenswrapper[4930]: I1124 13:07:05.656503 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/util/0.log" Nov 24 13:07:05 crc kubenswrapper[4930]: I1124 13:07:05.667848 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/extract/0.log" Nov 24 13:07:05 crc kubenswrapper[4930]: I1124 13:07:05.712633 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e998w4_2a6820ef-bc97-4869-9957-a94fbefdb9d9/pull/0.log" Nov 24 13:07:05 crc kubenswrapper[4930]: I1124 13:07:05.867263 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/extract-utilities/0.log" Nov 24 13:07:06 crc kubenswrapper[4930]: I1124 13:07:06.003960 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/extract-utilities/0.log" Nov 24 13:07:06 crc kubenswrapper[4930]: I1124 13:07:06.026896 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/extract-content/0.log" Nov 24 13:07:06 crc kubenswrapper[4930]: I1124 13:07:06.043123 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/extract-content/0.log" Nov 24 13:07:06 crc kubenswrapper[4930]: I1124 13:07:06.259379 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/extract-utilities/0.log" Nov 24 13:07:06 crc kubenswrapper[4930]: I1124 13:07:06.336895 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/extract-content/0.log" Nov 24 13:07:06 crc kubenswrapper[4930]: I1124 13:07:06.441903 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/extract-utilities/0.log" Nov 24 13:07:06 crc kubenswrapper[4930]: I1124 13:07:06.730470 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/extract-content/0.log" Nov 24 13:07:06 crc kubenswrapper[4930]: I1124 13:07:06.733094 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/extract-utilities/0.log" Nov 24 13:07:06 crc kubenswrapper[4930]: I1124 13:07:06.779394 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/extract-content/0.log" Nov 24 13:07:06 crc kubenswrapper[4930]: I1124 13:07:06.939033 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rnnzw_fb9cf3ee-0338-4245-a13e-edf25c6cc87c/registry-server/0.log" Nov 24 13:07:06 crc kubenswrapper[4930]: I1124 13:07:06.956122 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/extract-content/0.log" Nov 24 13:07:06 crc kubenswrapper[4930]: I1124 13:07:06.970950 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/extract-utilities/0.log" Nov 24 13:07:07 crc kubenswrapper[4930]: I1124 13:07:07.187758 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/util/0.log" Nov 24 13:07:07 crc kubenswrapper[4930]: I1124 13:07:07.256951 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dck4g_15734d21-7620-42df-bc4a-b9fd5db7162a/registry-server/0.log" Nov 24 13:07:07 crc kubenswrapper[4930]: I1124 13:07:07.509462 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/util/0.log" Nov 24 13:07:07 crc kubenswrapper[4930]: I1124 13:07:07.514503 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/pull/0.log" Nov 24 13:07:07 crc kubenswrapper[4930]: I1124 13:07:07.547385 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/pull/0.log" Nov 24 13:07:07 crc kubenswrapper[4930]: I1124 13:07:07.663027 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/util/0.log" Nov 24 13:07:07 crc kubenswrapper[4930]: I1124 13:07:07.694914 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/pull/0.log" Nov 24 13:07:07 crc kubenswrapper[4930]: I1124 13:07:07.695924 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c67m26q_30194744-e459-4f4e-8f0c-5205d76aa5e0/extract/0.log" Nov 24 13:07:07 crc kubenswrapper[4930]: I1124 13:07:07.891784 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vn8d4_6adfccee-6f09-45b8-b8b9-4cd6fe524680/marketplace-operator/0.log" Nov 24 13:07:07 crc kubenswrapper[4930]: I1124 13:07:07.912191 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/extract-utilities/0.log" Nov 24 13:07:08 crc kubenswrapper[4930]: I1124 13:07:08.050166 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/extract-utilities/0.log" Nov 24 13:07:08 crc kubenswrapper[4930]: I1124 13:07:08.099320 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/extract-content/0.log" Nov 24 13:07:08 crc kubenswrapper[4930]: I1124 13:07:08.139632 4930 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/extract-content/0.log" Nov 24 13:07:08 crc kubenswrapper[4930]: I1124 13:07:08.298121 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/extract-content/0.log" Nov 24 13:07:08 crc kubenswrapper[4930]: I1124 13:07:08.315848 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/extract-utilities/0.log" Nov 24 13:07:08 crc kubenswrapper[4930]: I1124 13:07:08.508412 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wvfmp_ab6112e7-2923-4b99-973b-bfc18820f99a/registry-server/0.log" Nov 24 13:07:08 crc kubenswrapper[4930]: I1124 13:07:08.521198 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9jt5w_29aca205-9637-47c8-9ab4-a5e1068f2c79/extract-utilities/0.log" Nov 24 13:07:08 crc kubenswrapper[4930]: I1124 13:07:08.727205 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9jt5w_29aca205-9637-47c8-9ab4-a5e1068f2c79/extract-content/0.log" Nov 24 13:07:08 crc kubenswrapper[4930]: I1124 13:07:08.733260 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9jt5w_29aca205-9637-47c8-9ab4-a5e1068f2c79/extract-content/0.log" Nov 24 13:07:08 crc kubenswrapper[4930]: I1124 13:07:08.743114 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9jt5w_29aca205-9637-47c8-9ab4-a5e1068f2c79/extract-utilities/0.log" Nov 24 13:07:08 crc kubenswrapper[4930]: I1124 13:07:08.924587 4930 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-9jt5w_29aca205-9637-47c8-9ab4-a5e1068f2c79/extract-content/0.log" Nov 24 13:07:08 crc kubenswrapper[4930]: I1124 13:07:08.961734 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9jt5w_29aca205-9637-47c8-9ab4-a5e1068f2c79/extract-utilities/0.log" Nov 24 13:07:09 crc kubenswrapper[4930]: I1124 13:07:09.077316 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9jt5w_29aca205-9637-47c8-9ab4-a5e1068f2c79/registry-server/0.log" Nov 24 13:07:13 crc kubenswrapper[4930]: I1124 13:07:13.085199 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:07:13 crc kubenswrapper[4930]: E1124 13:07:13.085911 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:07:25 crc kubenswrapper[4930]: I1124 13:07:25.085737 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:07:25 crc kubenswrapper[4930]: E1124 13:07:25.087434 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:07:37 crc kubenswrapper[4930]: I1124 13:07:37.084605 
4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:07:37 crc kubenswrapper[4930]: E1124 13:07:37.085448 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:07:48 crc kubenswrapper[4930]: I1124 13:07:48.085254 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:07:48 crc kubenswrapper[4930]: E1124 13:07:48.085973 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:08:01 crc kubenswrapper[4930]: I1124 13:08:01.084377 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:08:01 crc kubenswrapper[4930]: E1124 13:08:01.106031 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:08:16 crc kubenswrapper[4930]: I1124 
13:08:16.084490 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:08:16 crc kubenswrapper[4930]: E1124 13:08:16.085335 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:08:27 crc kubenswrapper[4930]: I1124 13:08:27.084181 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:08:27 crc kubenswrapper[4930]: E1124 13:08:27.084902 4930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kjhcw_openshift-machine-config-operator(8835064f-65c7-48cb-8b7d-330e5cce840a)\"" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" Nov 24 13:08:38 crc kubenswrapper[4930]: I1124 13:08:38.085014 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80" Nov 24 13:08:38 crc kubenswrapper[4930]: I1124 13:08:38.424175 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"b34690a798660a58f29a031fe78cd04f20ea857ba9c6fbae0ea11c8b3c6ade45"} Nov 24 13:08:49 crc kubenswrapper[4930]: I1124 13:08:49.521558 4930 generic.go:334] "Generic (PLEG): container finished" podID="b32af007-868e-41e2-bb7f-3a6fa74cb42e" 
containerID="a09bbba8b86383a8fc1ecc2c23c815f30ad8436f9a00aba2229fbf56fe3171ad" exitCode=0 Nov 24 13:08:49 crc kubenswrapper[4930]: I1124 13:08:49.521658 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xfdtg/must-gather-mq7pz" event={"ID":"b32af007-868e-41e2-bb7f-3a6fa74cb42e","Type":"ContainerDied","Data":"a09bbba8b86383a8fc1ecc2c23c815f30ad8436f9a00aba2229fbf56fe3171ad"} Nov 24 13:08:49 crc kubenswrapper[4930]: I1124 13:08:49.522874 4930 scope.go:117] "RemoveContainer" containerID="a09bbba8b86383a8fc1ecc2c23c815f30ad8436f9a00aba2229fbf56fe3171ad" Nov 24 13:08:50 crc kubenswrapper[4930]: I1124 13:08:50.289256 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xfdtg_must-gather-mq7pz_b32af007-868e-41e2-bb7f-3a6fa74cb42e/gather/0.log" Nov 24 13:08:59 crc kubenswrapper[4930]: I1124 13:08:59.926112 4930 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xfdtg/must-gather-mq7pz"] Nov 24 13:08:59 crc kubenswrapper[4930]: I1124 13:08:59.927489 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-xfdtg/must-gather-mq7pz" podUID="b32af007-868e-41e2-bb7f-3a6fa74cb42e" containerName="copy" containerID="cri-o://0071c73a80c7c1983f18af63ad710624e4d04d56a561bb9e63d1d832f6d7a114" gracePeriod=2 Nov 24 13:08:59 crc kubenswrapper[4930]: I1124 13:08:59.936261 4930 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xfdtg/must-gather-mq7pz"] Nov 24 13:09:00 crc kubenswrapper[4930]: I1124 13:09:00.663042 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xfdtg_must-gather-mq7pz_b32af007-868e-41e2-bb7f-3a6fa74cb42e/copy/0.log" Nov 24 13:09:00 crc kubenswrapper[4930]: I1124 13:09:00.663943 4930 generic.go:334] "Generic (PLEG): container finished" podID="b32af007-868e-41e2-bb7f-3a6fa74cb42e" containerID="0071c73a80c7c1983f18af63ad710624e4d04d56a561bb9e63d1d832f6d7a114" exitCode=143 Nov 24 
13:09:00 crc kubenswrapper[4930]: I1124 13:09:00.950831 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xfdtg_must-gather-mq7pz_b32af007-868e-41e2-bb7f-3a6fa74cb42e/copy/0.log" Nov 24 13:09:00 crc kubenswrapper[4930]: I1124 13:09:00.951192 4930 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xfdtg/must-gather-mq7pz" Nov 24 13:09:01 crc kubenswrapper[4930]: I1124 13:09:01.023642 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b32af007-868e-41e2-bb7f-3a6fa74cb42e-must-gather-output\") pod \"b32af007-868e-41e2-bb7f-3a6fa74cb42e\" (UID: \"b32af007-868e-41e2-bb7f-3a6fa74cb42e\") " Nov 24 13:09:01 crc kubenswrapper[4930]: I1124 13:09:01.023871 4930 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccg6\" (UniqueName: \"kubernetes.io/projected/b32af007-868e-41e2-bb7f-3a6fa74cb42e-kube-api-access-6ccg6\") pod \"b32af007-868e-41e2-bb7f-3a6fa74cb42e\" (UID: \"b32af007-868e-41e2-bb7f-3a6fa74cb42e\") " Nov 24 13:09:01 crc kubenswrapper[4930]: I1124 13:09:01.029304 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b32af007-868e-41e2-bb7f-3a6fa74cb42e-kube-api-access-6ccg6" (OuterVolumeSpecName: "kube-api-access-6ccg6") pod "b32af007-868e-41e2-bb7f-3a6fa74cb42e" (UID: "b32af007-868e-41e2-bb7f-3a6fa74cb42e"). InnerVolumeSpecName "kube-api-access-6ccg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:09:01 crc kubenswrapper[4930]: I1124 13:09:01.129347 4930 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccg6\" (UniqueName: \"kubernetes.io/projected/b32af007-868e-41e2-bb7f-3a6fa74cb42e-kube-api-access-6ccg6\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:01 crc kubenswrapper[4930]: I1124 13:09:01.159187 4930 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b32af007-868e-41e2-bb7f-3a6fa74cb42e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b32af007-868e-41e2-bb7f-3a6fa74cb42e" (UID: "b32af007-868e-41e2-bb7f-3a6fa74cb42e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:09:01 crc kubenswrapper[4930]: I1124 13:09:01.231246 4930 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b32af007-868e-41e2-bb7f-3a6fa74cb42e-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:01 crc kubenswrapper[4930]: I1124 13:09:01.674916 4930 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xfdtg_must-gather-mq7pz_b32af007-868e-41e2-bb7f-3a6fa74cb42e/copy/0.log" Nov 24 13:09:01 crc kubenswrapper[4930]: I1124 13:09:01.675588 4930 scope.go:117] "RemoveContainer" containerID="0071c73a80c7c1983f18af63ad710624e4d04d56a561bb9e63d1d832f6d7a114" Nov 24 13:09:01 crc kubenswrapper[4930]: I1124 13:09:01.675667 4930 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xfdtg/must-gather-mq7pz" Nov 24 13:09:01 crc kubenswrapper[4930]: I1124 13:09:01.695389 4930 scope.go:117] "RemoveContainer" containerID="a09bbba8b86383a8fc1ecc2c23c815f30ad8436f9a00aba2229fbf56fe3171ad" Nov 24 13:09:02 crc kubenswrapper[4930]: I1124 13:09:02.096926 4930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b32af007-868e-41e2-bb7f-3a6fa74cb42e" path="/var/lib/kubelet/pods/b32af007-868e-41e2-bb7f-3a6fa74cb42e/volumes" Nov 24 13:09:25 crc kubenswrapper[4930]: I1124 13:09:25.290959 4930 scope.go:117] "RemoveContainer" containerID="fb0b3ca4692135225fe575bb09255dd59b2535ba3b8e5c747079775cab9ef24f" Nov 24 13:09:25 crc kubenswrapper[4930]: I1124 13:09:25.309345 4930 scope.go:117] "RemoveContainer" containerID="a153b19e3f4135334c6b38f83341860bcb01074f87aeea978390574940bef051" Nov 24 13:09:25 crc kubenswrapper[4930]: I1124 13:09:25.335302 4930 scope.go:117] "RemoveContainer" containerID="4c6dd45bd4130c482b4fb3d0624676a0d56e764529205c59ff9f6c208b694a0c" Nov 24 13:10:25 crc kubenswrapper[4930]: I1124 13:10:25.448553 4930 scope.go:117] "RemoveContainer" containerID="f5a266edd9ead7725bdb596dd9aff4e6f1a884896ce6b1e0bd119149c85e8fb0" Nov 24 13:11:01 crc kubenswrapper[4930]: I1124 13:11:01.808918 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:11:01 crc kubenswrapper[4930]: I1124 13:11:01.809457 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:11:31 crc 
kubenswrapper[4930]: I1124 13:11:31.809606 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:11:31 crc kubenswrapper[4930]: I1124 13:11:31.810101 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:12:01 crc kubenswrapper[4930]: I1124 13:12:01.808849 4930 patch_prober.go:28] interesting pod/machine-config-daemon-kjhcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:12:01 crc kubenswrapper[4930]: I1124 13:12:01.809509 4930 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:12:01 crc kubenswrapper[4930]: I1124 13:12:01.809654 4930 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" Nov 24 13:12:01 crc kubenswrapper[4930]: I1124 13:12:01.810488 4930 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b34690a798660a58f29a031fe78cd04f20ea857ba9c6fbae0ea11c8b3c6ade45"} 
pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:12:01 crc kubenswrapper[4930]: I1124 13:12:01.810561 4930 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" podUID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerName="machine-config-daemon" containerID="cri-o://b34690a798660a58f29a031fe78cd04f20ea857ba9c6fbae0ea11c8b3c6ade45" gracePeriod=600 Nov 24 13:12:02 crc kubenswrapper[4930]: I1124 13:12:02.455670 4930 generic.go:334] "Generic (PLEG): container finished" podID="8835064f-65c7-48cb-8b7d-330e5cce840a" containerID="b34690a798660a58f29a031fe78cd04f20ea857ba9c6fbae0ea11c8b3c6ade45" exitCode=0 Nov 24 13:12:02 crc kubenswrapper[4930]: I1124 13:12:02.455735 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerDied","Data":"b34690a798660a58f29a031fe78cd04f20ea857ba9c6fbae0ea11c8b3c6ade45"} Nov 24 13:12:02 crc kubenswrapper[4930]: I1124 13:12:02.455999 4930 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kjhcw" event={"ID":"8835064f-65c7-48cb-8b7d-330e5cce840a","Type":"ContainerStarted","Data":"95c691f3ca0dbaba950e5f7fc5199739d7098fb5ae909d72c710ecd80131a2ba"} Nov 24 13:12:02 crc kubenswrapper[4930]: I1124 13:12:02.456027 4930 scope.go:117] "RemoveContainer" containerID="1e4c8f6ecd0e6d3175951dedf2a20a53401309fd729770ca9a82203989faae80"